From patchwork Mon Mar 23 13:13:32 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222050
X-Mailing-List: netdev@vger.kernel.org
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH net-next 01/17] net: introduce the MACSEC netdev feature
Date: Mon, 23 Mar 2020 16:13:32 +0300
Message-ID: <20200323131348.340-2-irusskikh@marvell.com>
In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com>
References: <20200323131348.340-1-irusskikh@marvell.com>
From: Antoine Tenart

This patch introduces a new netdev feature, which will be used by drivers
to state they can perform MACsec transformations in hardware.

Signed-off-by: Antoine Tenart
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 include/linux/netdev_features.h | 3 +++
 net/ethtool/common.c            | 1 +
 2 files changed, 4 insertions(+)

diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
index 34d050bb1ae6..9d53c5ad272c 100644
--- a/include/linux/netdev_features.h
+++ b/include/linux/netdev_features.h
@@ -83,6 +83,8 @@ enum {
 	NETIF_F_HW_TLS_RECORD_BIT,	/* Offload TLS record */
 	NETIF_F_GRO_FRAGLIST_BIT,	/* Fraglist GRO */
 
+	NETIF_F_HW_MACSEC_BIT,		/* Offload MACsec operations */
+
 	/*
 	 * Add your fresh new feature above and remember to update
 	 * netdev_features_strings[] in net/core/ethtool.c and maybe
@@ -154,6 +156,7 @@ enum {
 #define NETIF_F_HW_TLS_RX	__NETIF_F(HW_TLS_RX)
 #define NETIF_F_GRO_FRAGLIST	__NETIF_F(GRO_FRAGLIST)
 #define NETIF_F_GSO_FRAGLIST	__NETIF_F(GSO_FRAGLIST)
+#define NETIF_F_HW_MACSEC	__NETIF_F(HW_MACSEC)
 
 /* Finds the next feature with the highest number of the range of start till 0.
  */

diff --git a/net/ethtool/common.c b/net/ethtool/common.c
index dab047eec943..51a0941fc62f 100644
--- a/net/ethtool/common.c
+++ b/net/ethtool/common.c
@@ -60,6 +60,7 @@ const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN] = {
 	[NETIF_F_HW_TLS_TX_BIT] =	 "tls-hw-tx-offload",
 	[NETIF_F_HW_TLS_RX_BIT] =	 "tls-hw-rx-offload",
 	[NETIF_F_GRO_FRAGLIST_BIT] =	 "rx-gro-list",
+	[NETIF_F_HW_MACSEC_BIT] =	 "macsec-hw-offload",
 };
 
 const char
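For context, a NIC driver that implements MACsec offload would advertise this
bit alongside its other features from its probe path. A minimal sketch,
assuming a hypothetical driver (the foo_* names are illustrative and not part
of this series):

    static void foo_init_features(struct net_device *ndev)
    {
    	/* State that this NIC can perform MACsec transformations in
    	 * hardware, and enable the offload by default.
    	 */
    	ndev->hw_features |= NETIF_F_HW_MACSEC;
    	ndev->features |= NETIF_F_HW_MACSEC;
    }
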
From patchwork Mon Mar 23 13:13:34 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222049
X-Mailing-List: netdev@vger.kernel.org
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH net-next 03/17] net: macsec: allow to reference a netdev from a MACsec context
Date: Mon, 23 Mar 2020 16:13:34 +0300
Message-ID: <20200323131348.340-4-irusskikh@marvell.com>
In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com>
References: <20200323131348.340-1-irusskikh@marvell.com>

From: Antoine Tenart

This patch allows a net_device to be referenced from a MACsec context.
This is needed to allow implementing MACsec operations in net device
drivers.

Signed-off-by: Antoine Tenart
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 include/net/macsec.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/net/macsec.h b/include/net/macsec.h
index 2e4780dbf5c6..71de2c863df7 100644
--- a/include/net/macsec.h
+++ b/include/net/macsec.h
@@ -220,7 +220,10 @@ struct macsec_secy {
  * struct macsec_context - MACsec context for hardware offloading
  */
 struct macsec_context {
-	struct phy_device *phydev;
+	union {
+		struct net_device *netdev;
+		struct phy_device *phydev;
+	};
 	enum macsec_offload offload;
 
 	struct macsec_secy *secy;
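Because netdev and phydev now share a union, a consumer of the context has to
know which member is valid; the offload type carried in ctx->offload makes
that unambiguous. A minimal sketch of the dispatch, assuming the
MACSEC_OFFLOAD_MAC value introduced elsewhere in this series (the helper name
is illustrative):

    /* Sketch only: pick the device backing the offload. The core fills
     * phydev for PHY offload and netdev for MAC offload.
     */
    static struct device *ctx_to_dev(const struct macsec_context *ctx)
    {
    	if (ctx->offload == MACSEC_OFFLOAD_PHY)
    		return &ctx->phydev->mdio.dev;
    	return &ctx->netdev->dev;	/* MACSEC_OFFLOAD_MAC */
    }
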
From patchwork Mon Mar 23 13:13:36 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222048
X-Mailing-List: netdev@vger.kernel.org
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Dmitry Bogdanov, Igor Russkikh
Subject: [PATCH net-next 05/17] net: macsec: init secy pointer in macsec_context
Date: Mon, 23 Mar 2020 16:13:36 +0300
Message-ID: <20200323131348.340-6-irusskikh@marvell.com>
In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com>
References: <20200323131348.340-1-irusskikh@marvell.com>

From: Dmitry Bogdanov

This patch adds secy pointer initialization in the macsec_context.
It will be used by MAC drivers in offloading operations.

Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index c4d5f609871e..0f6808f3ff91 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -1793,6 +1793,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
 
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.rx_sa = rx_sa;
+		ctx.secy = secy;
 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
 		       MACSEC_KEYID_LEN);
 
@@ -1840,6 +1841,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 	struct nlattr **attrs = info->attrs;
 	struct macsec_rx_sc *rx_sc;
 	struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
+	struct macsec_secy *secy;
 	bool was_active;
 	int ret;
 
@@ -1859,6 +1861,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 		return PTR_ERR(dev);
 	}
 
+	secy = &macsec_priv(dev)->secy;
 	sci = nla_get_sci(tb_rxsc[MACSEC_RXSC_ATTR_SCI]);
 
 	rx_sc = create_rx_sc(dev, sci);
@@ -1882,6 +1885,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 		}
 
 		ctx.rx_sc = rx_sc;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_add_rxsc, &ctx);
 		if (ret)
@@ -2031,6 +2035,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
 
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.tx_sa = tx_sa;
+		ctx.secy = secy;
 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
 		       MACSEC_KEYID_LEN);
 
@@ -2106,6 +2111,7 @@ static int macsec_del_rxsa(struct sk_buff *skb, struct genl_info *info)
 
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.rx_sa = rx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_del_rxsa, &ctx);
 		if (ret)
@@ -2171,6 +2177,7 @@ static int macsec_del_rxsc(struct sk_buff *skb, struct genl_info *info)
 		}
 
 		ctx.rx_sc = rx_sc;
+		ctx.secy = secy;
 		ret = macsec_offload(ops->mdo_del_rxsc, &ctx);
 		if (ret)
 			goto cleanup;
@@ -2229,6 +2236,7 @@ static int macsec_del_txsa(struct sk_buff *skb, struct genl_info *info)
 
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.tx_sa = tx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_del_txsa, &ctx);
 		if (ret)
@@ -2340,6 +2348,7 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info)
 
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.tx_sa = tx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_upd_txsa, &ctx);
 		if (ret)
@@ -2432,6 +2441,7 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
 
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.rx_sa = rx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_upd_rxsa, &ctx);
 		if (ret)
@@ -2502,6 +2512,7 @@ static int macsec_upd_rxsc(struct sk_buff *skb, struct genl_info *info)
 		}
 
 		ctx.rx_sc = rx_sc;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_upd_rxsc, &ctx);
 		if (ret)
@@ -3369,6 +3380,7 @@ static int macsec_dev_open(struct net_device *dev)
 			goto clear_allmulti;
 		}
 
+		ctx.secy = &macsec->secy;
 		err = macsec_offload(ops->mdo_dev_open, &ctx);
 		if (err)
 			goto clear_allmulti;
@@ -3400,8 +3412,10 @@ static int macsec_dev_stop(struct net_device *dev)
 		struct macsec_context ctx;
 
 		ops = macsec_get_ops(macsec, &ctx);
-		if (ops)
+		if (ops) {
+			ctx.secy = &macsec->secy;
 			macsec_offload(ops->mdo_dev_stop, &ctx);
+		}
 	}
 
 	dev_mc_unsync(real_dev, dev);
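With ctx->secy now populated for every operation, a MAC driver can key its
private per-SecY state directly off the pointer, which is what the Atlantic
driver does later in this series. A hedged sketch (the foo_* types and bound
are illustrative):

    static int foo_find_txsc(const struct foo_macsec_cfg *cfg,
    			 const struct macsec_context *ctx)
    {
    	int i;

    	/* Match the software SecY pointer recorded at add_secy time. */
    	for (i = 0; i < FOO_MAX_SC; i++)
    		if (cfg->txsc[i].sw_secy == ctx->secy)
    			return i;
    	return -ENOENT;
    }
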
From patchwork Mon Mar 23 13:13:38 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222047
X-Mailing-List: netdev@vger.kernel.org
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH net-next 07/17] net: macsec: support multicast/broadcast when offloading
Date: Mon, 23 Mar 2020 16:13:38 +0300
Message-ID: <20200323131348.340-8-irusskikh@marvell.com>
In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com>
References: <20200323131348.340-1-irusskikh@marvell.com>

From: Mark Starovoytov

The idea is simple. If the frame is an exact match for the controlled port
(based on DA comparison), then we simply divert this skb to the matching
port. Multicast/broadcast messages are delivered to all ports.

Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 51 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index 5d1564cda7fe..884407d92f93 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -1005,22 +1005,53 @@ static enum rx_handler_result handle_not_macsec(struct sk_buff *skb)
 {
 	/* Deliver to the uncontrolled port by default */
 	enum rx_handler_result ret = RX_HANDLER_PASS;
+	struct ethhdr *hdr = eth_hdr(skb);
 	struct macsec_rxh_data *rxd;
 	struct macsec_dev *macsec;
 
 	rcu_read_lock();
 	rxd = macsec_data_rcu(skb->dev);
 
-	/* 10.6 If the management control validateFrames is not
-	 * Strict, frames without a SecTAG are received, counted, and
-	 * delivered to the Controlled Port
-	 */
 	list_for_each_entry_rcu(macsec, &rxd->secys, secys) {
 		struct sk_buff *nskb;
 		struct pcpu_secy_stats *secy_stats = this_cpu_ptr(macsec->stats);
+		struct net_device *ndev = macsec->secy.netdev;
 
-		if (!macsec_is_offloaded(macsec) &&
-		    macsec->secy.validate_frames == MACSEC_VALIDATE_STRICT) {
+		/* If h/w offloading is enabled, HW decodes frames and strips
+		 * the SecTAG, so we have to deduce which port to deliver to.
+		 */
+		if (macsec_is_offloaded(macsec) && netif_running(ndev)) {
+			if (ether_addr_equal_64bits(hdr->h_dest,
+						    ndev->dev_addr)) {
+				/* exact match, divert skb to this port */
+				skb->dev = ndev;
+				skb->pkt_type = PACKET_HOST;
+				ret = RX_HANDLER_ANOTHER;
+				goto out;
+			} else if (is_multicast_ether_addr_64bits(
+					   hdr->h_dest)) {
+				/* multicast frame, deliver on this port too */
+				nskb = skb_clone(skb, GFP_ATOMIC);
+				if (!nskb)
+					break;
+
+				nskb->dev = ndev;
+				if (ether_addr_equal_64bits(hdr->h_dest,
+							    ndev->broadcast))
+					nskb->pkt_type = PACKET_BROADCAST;
+				else
+					nskb->pkt_type = PACKET_MULTICAST;
+
+				netif_rx(nskb);
+			}
+			continue;
+		}
+
+		/* 10.6 If the management control validateFrames is not
+		 * Strict, frames without a SecTAG are received, counted, and
+		 * delivered to the Controlled Port
+		 */
+		if (macsec->secy.validate_frames == MACSEC_VALIDATE_STRICT) {
 			u64_stats_update_begin(&secy_stats->syncp);
 			secy_stats->stats.InPktsNoTag++;
 			u64_stats_update_end(&secy_stats->syncp);
@@ -1032,19 +1063,13 @@ static enum rx_handler_result handle_not_macsec(struct sk_buff *skb)
 		if (!nskb)
 			break;
 
-		nskb->dev = macsec->secy.netdev;
+		nskb->dev = ndev;
 
 		if (netif_rx(nskb) == NET_RX_SUCCESS) {
 			u64_stats_update_begin(&secy_stats->syncp);
 			secy_stats->stats.InPktsUntagged++;
 			u64_stats_update_end(&secy_stats->syncp);
 		}
-
-		if (netif_running(macsec->secy.netdev) &&
-		    macsec_is_offloaded(macsec)) {
-			ret = RX_HANDLER_EXACT;
-			goto out;
-		}
 	}
 
 out:
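The delivery policy above can be restated as a small classifier: an exact DA
match diverts the skb, multicast/broadcast clones a copy for the port, and
anything else falls through to the standard validateFrames handling. An
illustrative condensation using the same etherdevice.h helpers as the patch
(the foo_* names are not in the patch):

    enum foo_verdict { FOO_EXACT, FOO_MCAST, FOO_PASS };

    static enum foo_verdict foo_classify(const struct ethhdr *hdr,
    				     const struct net_device *ndev)
    {
    	if (ether_addr_equal_64bits(hdr->h_dest, ndev->dev_addr))
    		return FOO_EXACT;	/* divert skb to this port */
    	if (is_multicast_ether_addr_64bits(hdr->h_dest))
    		return FOO_MCAST;	/* clone a copy for this port */
    	return FOO_PASS;
    }
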
From patchwork Mon Mar 23 13:13:40 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222046
X-Mailing-List: netdev@vger.kernel.org
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH net-next 09/17] net: macsec: report real_dev features when HW offloading is enabled
Date: Mon, 23 Mar 2020 16:13:40 +0300
Message-ID: <20200323131348.340-10-irusskikh@marvell.com>
In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com>
References: <20200323131348.340-1-irusskikh@marvell.com>

From: Mark Starovoytov

This patch makes the MACsec offloaded device propagate real_dev features.

Issue description: real_dev features are disabled upon macsec creation.

Root cause: the feature limitation specific to the SW MACsec implementation
is being applied to the HW offloaded case as well. This causes a
'set_features' request on the real_dev with a reduced feature set due to
chain propagation.

Proposed solution: report real_dev features when HW offloading is enabled.

NB! The MACsec offloaded device does not propagate VLAN offload features at
the moment. This can potentially be added later on as a separate patch.

Note: this patch requires HW offloading to be enabled by default in order to
function properly.

Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index 59bf7d5f39ff..da2e28e6c15d 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -2632,6 +2632,10 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
 		goto rollback;
 
 	rtnl_unlock();
+	/* Force features update, since they are different for SW MACSec and
+	 * HW offloading cases.
+	 */
+	netdev_update_features(dev);
 	return 0;
 
 rollback:
@@ -3398,9 +3402,16 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
 	return ret;
 }
 
-#define MACSEC_FEATURES \
+#define SW_MACSEC_FEATURES \
 	(NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST)
 
+/* If h/w offloading is enabled, use real device features save for
+ *   VLAN_FEATURES - they require additional ops
+ *   HW_MACSEC - no reason to report it
+ */
+#define REAL_DEV_FEATURES(dev) \
+	((dev)->features & ~(NETIF_F_VLAN_FEATURES | NETIF_F_HW_MACSEC))
+
 static int macsec_dev_init(struct net_device *dev)
 {
 	struct macsec_dev *macsec = macsec_priv(dev);
@@ -3417,8 +3428,12 @@ static int macsec_dev_init(struct net_device *dev)
 		return err;
 	}
 
-	dev->features = real_dev->features & MACSEC_FEATURES;
-	dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
+	if (macsec_is_offloaded(macsec)) {
+		dev->features = REAL_DEV_FEATURES(real_dev);
+	} else {
+		dev->features = real_dev->features & SW_MACSEC_FEATURES;
+		dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
+	}
 
 	dev->needed_headroom = real_dev->needed_headroom +
 			       MACSEC_NEEDED_HEADROOM;
@@ -3447,7 +3462,10 @@ static netdev_features_t macsec_fix_features(struct net_device *dev,
 	struct macsec_dev *macsec = macsec_priv(dev);
 	struct net_device *real_dev = macsec->real_dev;
 
-	features &= (real_dev->features & MACSEC_FEATURES) |
+	if (macsec_is_offloaded(macsec))
+		return REAL_DEV_FEATURES(real_dev);
+
+	features &= (real_dev->features & SW_MACSEC_FEATURES) |
 		    NETIF_F_GSO_SOFTWARE | NETIF_F_SOFT_FEATURES;
 	features |= NETIF_F_LLTX;
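A worked example of the REAL_DEV_FEATURES() mask makes the effect concrete.
The feature set assumed for real_dev below is illustrative only:

    /* Sketch: what a macsec device exposes when offloading, given a
     * real_dev advertising SG, checksum, TSO, VLAN TX insertion and
     * the new MACsec bit.
     */
    static netdev_features_t features_example(void)
    {
    	netdev_features_t real = NETIF_F_SG | NETIF_F_HW_CSUM |
    				 NETIF_F_TSO | NETIF_F_HW_VLAN_CTAG_TX |
    				 NETIF_F_HW_MACSEC;

    	/* VLAN offloads and the MACsec bit itself are masked out; the
    	 * result here is NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_TSO.
    	 */
    	return real & ~(NETIF_F_VLAN_FEATURES | NETIF_F_HW_MACSEC);
    }
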
From patchwork Mon Mar 23 13:13:43 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222045
X-Mailing-List: netdev@vger.kernel.org
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Dmitry Bogdanov, Igor Russkikh
Subject: [PATCH net-next 12/17] net: atlantic: MACSec egress offload implementation
Date: Mon, 23 Mar 2020 16:13:43 +0300
Message-ID: <20200323131348.340-13-irusskikh@marvell.com>
In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com>
References: <20200323131348.340-1-irusskikh@marvell.com>

From: Dmitry Bogdanov

This patch adds support for MACSec egress HW offloading on Atlantic
network cards.

Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 .../ethernet/aquantia/atlantic/aq_macsec.c | 681 +++++++++++++++++-
 .../ethernet/aquantia/atlantic/aq_macsec.h |   4 +
 2 files changed, 677 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
index e9d8852bfbe0..cf5862958e92 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
@@ -7,44 +7,524 @@
 
 #include "aq_nic.h"
 #include
 
+#include "macsec/macsec_api.h"
+#define AQ_MACSEC_KEY_LEN_128_BIT 16
+#define AQ_MACSEC_KEY_LEN_192_BIT 24
+#define AQ_MACSEC_KEY_LEN_256_BIT 32
+
+enum aq_clear_type {
+	/* update HW configuration */
+	AQ_CLEAR_HW = BIT(0),
+	/* update SW configuration (busy bits, pointers) */
+	AQ_CLEAR_SW = BIT(1),
+	/* update both HW and SW configuration */
+	AQ_CLEAR_ALL = AQ_CLEAR_HW | AQ_CLEAR_SW,
+};
+
+static int aq_clear_txsc(struct aq_nic_s *nic, const int txsc_idx,
+			 enum aq_clear_type clear_type);
+static int aq_clear_txsa(struct aq_nic_s *nic, struct aq_macsec_txsc *aq_txsc,
+			 const int sa_num, enum aq_clear_type clear_type);
+static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy,
+			 enum aq_clear_type clear_type);
+static int aq_apply_macsec_cfg(struct aq_nic_s *nic);
+static int aq_apply_secy_cfg(struct aq_nic_s *nic,
+			     const struct macsec_secy *secy);
+
+static void aq_ether_addr_to_mac(u32 mac[2], unsigned char *emac)
+{
+	u32 tmp[2] = { 0 };
+
+	memcpy(((u8 *)tmp) + 2, emac, ETH_ALEN);
+
+	mac[0] = swab32(tmp[1]);
+	mac[1] = swab32(tmp[0]);
+}
+
+/* There's a 1:1 mapping between SecY and TX SC */
+static int aq_get_txsc_idx_from_secy(struct aq_macsec_cfg *macsec_cfg,
+				     const
struct macsec_secy *secy) +{ + int i; + + if (unlikely(!secy)) + return -1; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (macsec_cfg->aq_txsc[i].sw_secy == secy) + return i; + } + return -1; +} + +static int aq_get_txsc_idx_from_sc_idx(const enum aq_macsec_sc_sa sc_sa, + const int sc_idx) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sc_idx >> 2; + case aq_macsec_sa_sc_2sa_16sc: + return sc_idx >> 1; + case aq_macsec_sa_sc_1sa_32sc: + return sc_idx; + default: + WARN_ONCE(1, "Invalid sc_sa"); + } + return -1; +} + +/* Rotate keys u32[8] */ +static void aq_rotate_keys(u32 (*key)[8], const int key_len) +{ + u32 tmp[8] = { 0 }; + + memcpy(&tmp, key, sizeof(tmp)); + memset(*key, 0, sizeof(*key)); + + if (key_len == AQ_MACSEC_KEY_LEN_128_BIT) { + (*key)[0] = swab32(tmp[3]); + (*key)[1] = swab32(tmp[2]); + (*key)[2] = swab32(tmp[1]); + (*key)[3] = swab32(tmp[0]); + } else if (key_len == AQ_MACSEC_KEY_LEN_192_BIT) { + (*key)[0] = swab32(tmp[5]); + (*key)[1] = swab32(tmp[4]); + (*key)[2] = swab32(tmp[3]); + (*key)[3] = swab32(tmp[2]); + (*key)[4] = swab32(tmp[1]); + (*key)[5] = swab32(tmp[0]); + } else if (key_len == AQ_MACSEC_KEY_LEN_256_BIT) { + (*key)[0] = swab32(tmp[7]); + (*key)[1] = swab32(tmp[6]); + (*key)[2] = swab32(tmp[5]); + (*key)[3] = swab32(tmp[4]); + (*key)[4] = swab32(tmp[3]); + (*key)[5] = swab32(tmp[2]); + (*key)[6] = swab32(tmp[1]); + (*key)[7] = swab32(tmp[0]); + } else { + pr_warn("Rotate_keys: invalid key_len\n"); + } +} + static int aq_mdo_dev_open(struct macsec_context *ctx) { - return 0; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int ret = 0; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev)) + ret = aq_apply_secy_cfg(nic, ctx->secy); + + return ret; } static int aq_mdo_dev_stop(struct macsec_context *ctx) { + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int i; + + if (ctx->prepare) + return 0; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (nic->macsec_cfg->txsc_idx_busy & BIT(i)) + aq_clear_secy(nic, nic->macsec_cfg->aq_txsc[i].sw_secy, + AQ_CLEAR_HW); + } + return 0; } +static int aq_set_txsc(struct aq_nic_s *nic, const int txsc_idx) +{ + struct aq_macsec_txsc *aq_txsc = &nic->macsec_cfg->aq_txsc[txsc_idx]; + struct aq_mss_egress_class_record tx_class_rec = { 0 }; + const struct macsec_secy *secy = aq_txsc->sw_secy; + struct aq_mss_egress_sc_record sc_rec = { 0 }; + unsigned int sc_idx = aq_txsc->hw_sc_idx; + struct aq_hw_s *hw = nic->aq_hw; + __be64 nsci; + int ret = 0; + + aq_ether_addr_to_mac(tx_class_rec.mac_sa, secy->netdev->dev_addr); + + netdev_dbg(nic->ndev, + "set secy: sci %#llx, sc_idx=%d, protect=%d, curr_an=%d\n", + secy->sci, sc_idx, secy->protect_frames, + secy->tx_sc.encoding_sa); + + nsci = cpu_to_be64((__force u64)secy->sci); + memcpy(tx_class_rec.sci, &nsci, sizeof(nsci)); + tx_class_rec.sci_mask = 0; + + tx_class_rec.sa_mask = 0x3f; + + tx_class_rec.action = 0; /* forward to SA/SC table */ + tx_class_rec.valid = 1; + + tx_class_rec.sc_idx = sc_idx; + + tx_class_rec.sc_sa = nic->macsec_cfg->sc_sa; + + ret = aq_mss_set_egress_class_record(hw, &tx_class_rec, txsc_idx); + if (ret) + return ret; + + sc_rec.protect = secy->protect_frames; + if (secy->tx_sc.encrypt) + sc_rec.tci |= BIT(1); + if (secy->tx_sc.scb) + sc_rec.tci |= BIT(2); + if (secy->tx_sc.send_sci) + sc_rec.tci |= BIT(3); + if (secy->tx_sc.end_station) + sc_rec.tci |= BIT(4); + /* The C bit is clear if and only if the Secure Data is + * exactly the same as the User Data and the ICV is 16 octets long. 
+ */ + if (!(secy->icv_len == 16 && !secy->tx_sc.encrypt)) + sc_rec.tci |= BIT(0); + + sc_rec.an_roll = 0; + + switch (secy->key_len) { + case AQ_MACSEC_KEY_LEN_128_BIT: + sc_rec.sak_len = 0; + break; + case AQ_MACSEC_KEY_LEN_192_BIT: + sc_rec.sak_len = 1; + break; + case AQ_MACSEC_KEY_LEN_256_BIT: + sc_rec.sak_len = 2; + break; + default: + WARN_ONCE(1, "Invalid sc_sa"); + return -EINVAL; + } + + sc_rec.curr_an = secy->tx_sc.encoding_sa; + sc_rec.valid = 1; + sc_rec.fresh = 1; + + return aq_mss_set_egress_sc_record(hw, &sc_rec, sc_idx); +} + +static u32 aq_sc_idx_max(const enum aq_macsec_sc_sa sc_sa) +{ + u32 result = 0; + + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + result = 8; + break; + case aq_macsec_sa_sc_2sa_16sc: + result = 16; + break; + case aq_macsec_sa_sc_1sa_32sc: + result = 32; + break; + default: + break; + }; + + return result; +} + +static u32 aq_to_hw_sc_idx(const u32 sc_idx, const enum aq_macsec_sc_sa sc_sa) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sc_idx << 2; + case aq_macsec_sa_sc_2sa_16sc: + return sc_idx << 1; + case aq_macsec_sa_sc_1sa_32sc: + return sc_idx; + default: + /* Should never happen */ + break; + }; + + WARN_ON(true); + return sc_idx; +} + +static enum aq_macsec_sc_sa sc_sa_from_num_an(const int num_an) +{ + enum aq_macsec_sc_sa sc_sa = aq_macsec_sa_sc_not_used; + + switch (num_an) { + case 4: + sc_sa = aq_macsec_sa_sc_4sa_8sc; + break; + case 2: + sc_sa = aq_macsec_sa_sc_2sa_16sc; + break; + case 1: + sc_sa = aq_macsec_sa_sc_1sa_32sc; + break; + default: + break; + } + + return sc_sa; +} + static int aq_mdo_add_secy(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + enum aq_macsec_sc_sa sc_sa; + u32 txsc_idx; + int ret = 0; + + sc_sa = sc_sa_from_num_an(MACSEC_NUM_AN); + if (sc_sa == aq_macsec_sa_sc_not_used) + return -EINVAL; + + if (hweight32(cfg->txsc_idx_busy) >= aq_sc_idx_max(sc_sa)) + return -ENOSPC; + + txsc_idx = ffz(cfg->txsc_idx_busy); + if (txsc_idx == AQ_MACSEC_MAX_SC) + return -ENOSPC; + + if (ctx->prepare) + return 0; + + cfg->sc_sa = sc_sa; + cfg->aq_txsc[txsc_idx].hw_sc_idx = aq_to_hw_sc_idx(txsc_idx, sc_sa); + cfg->aq_txsc[txsc_idx].sw_secy = secy; + netdev_dbg(nic->ndev, "add secy: txsc_idx=%d, sc_idx=%d\n", txsc_idx, + cfg->aq_txsc[txsc_idx].hw_sc_idx); + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_set_txsc(nic, txsc_idx); + + set_bit(txsc_idx, &cfg->txsc_idx_busy); + + return 0; } static int aq_mdo_upd_secy(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + const struct macsec_secy *secy = ctx->secy; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); + if (txsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_set_txsc(nic, txsc_idx); + + return ret; +} + +static int aq_clear_txsc(struct aq_nic_s *nic, const int txsc_idx, + enum aq_clear_type clear_type) +{ + struct aq_macsec_txsc *tx_sc = &nic->macsec_cfg->aq_txsc[txsc_idx]; + struct aq_mss_egress_class_record tx_class_rec = { 0 }; + struct aq_mss_egress_sc_record sc_rec = { 0 }; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + int sa_num; + + for_each_set_bit (sa_num, &tx_sc->tx_sa_idx_busy, AQ_MACSEC_MAX_SA) { + ret = aq_clear_txsa(nic, tx_sc, sa_num, clear_type); + if (ret) + return 
ret; + } + + if (clear_type & AQ_CLEAR_HW) { + ret = aq_mss_set_egress_class_record(hw, &tx_class_rec, + txsc_idx); + if (ret) + return ret; + + sc_rec.fresh = 1; + ret = aq_mss_set_egress_sc_record(hw, &sc_rec, + tx_sc->hw_sc_idx); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_SW) { + clear_bit(txsc_idx, &nic->macsec_cfg->txsc_idx_busy); + nic->macsec_cfg->aq_txsc[txsc_idx].sw_secy = NULL; + } + + return ret; } static int aq_mdo_del_secy(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int ret = 0; + + if (ctx->prepare) + return 0; + + if (!nic->macsec_cfg) + return 0; + + ret = aq_clear_secy(nic, ctx->secy, AQ_CLEAR_ALL); + + return ret; +} + +static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx, + const struct macsec_secy *secy, + const struct macsec_tx_sa *tx_sa, + const unsigned char *key, const unsigned char an) +{ + struct aq_mss_egress_sakey_record key_rec; + const unsigned int sa_idx = sc_idx | an; + struct aq_mss_egress_sa_record sa_rec; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + netdev_dbg(nic->ndev, "set tx_sa %d: active=%d, next_pn=%d\n", an, + tx_sa->active, tx_sa->next_pn); + + memset(&sa_rec, 0, sizeof(sa_rec)); + sa_rec.valid = tx_sa->active; + sa_rec.fresh = 1; + sa_rec.next_pn = tx_sa->next_pn; + + ret = aq_mss_set_egress_sa_record(hw, &sa_rec, sa_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_egress_sa_record failed with %d\n", ret); + return ret; + } + + if (!key) + return ret; + + memset(&key_rec, 0, sizeof(key_rec)); + memcpy(&key_rec.key, key, secy->key_len); + + aq_rotate_keys(&key_rec.key, secy->key_len); + + ret = aq_mss_set_egress_sakey_record(hw, &key_rec, sa_idx); + if (ret) + netdev_err(nic->ndev, + "aq_mss_set_egress_sakey_record failed with %d\n", + ret); + + return ret; } static int aq_mdo_add_txsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + struct aq_macsec_txsc *aq_txsc; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(cfg, secy); + if (txsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + aq_txsc = &cfg->aq_txsc[txsc_idx]; + set_bit(ctx->sa.assoc_num, &aq_txsc->tx_sa_idx_busy); + + memcpy(aq_txsc->tx_sa_key[ctx->sa.assoc_num], ctx->sa.key, + secy->key_len); + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_txsa(nic, aq_txsc->hw_sc_idx, secy, + ctx->sa.tx_sa, ctx->sa.key, + ctx->sa.assoc_num); + + return ret; } static int aq_mdo_upd_txsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + struct aq_macsec_txsc *aq_txsc; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(cfg, secy); + if (txsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + aq_txsc = &cfg->aq_txsc[txsc_idx]; + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_txsa(nic, aq_txsc->hw_sc_idx, secy, + ctx->sa.tx_sa, NULL, ctx->sa.assoc_num); + + return ret; +} + +static int aq_clear_txsa(struct aq_nic_s *nic, struct aq_macsec_txsc *aq_txsc, + const int sa_num, enum aq_clear_type clear_type) +{ + const int sa_idx = aq_txsc->hw_sc_idx | sa_num; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + if (clear_type & AQ_CLEAR_SW) + clear_bit(sa_num, 
&aq_txsc->tx_sa_idx_busy); + + if ((clear_type & AQ_CLEAR_HW) && netif_carrier_ok(nic->ndev)) { + struct aq_mss_egress_sakey_record key_rec; + struct aq_mss_egress_sa_record sa_rec; + + memset(&sa_rec, 0, sizeof(sa_rec)); + sa_rec.fresh = 1; + + ret = aq_mss_set_egress_sa_record(hw, &sa_rec, sa_idx); + if (ret) + return ret; + + memset(&key_rec, 0, sizeof(key_rec)); + return aq_mss_set_egress_sakey_record(hw, &key_rec, sa_idx); + } + + return 0; } static int aq_mdo_del_txsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(cfg, ctx->secy); + if (txsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + ret = aq_clear_txsa(nic, &cfg->aq_txsc[txsc_idx], ctx->sa.assoc_num, + AQ_CLEAR_ALL); + + return ret; } static int aq_mdo_add_rxsc(struct macsec_context *ctx) @@ -77,8 +557,170 @@ static int aq_mdo_del_rxsa(struct macsec_context *ctx) return -EOPNOTSUPP; } +static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) +{ + struct aq_macsec_txsc *aq_txsc = &nic->macsec_cfg->aq_txsc[txsc_idx]; + const struct macsec_secy *secy = aq_txsc->sw_secy; + struct macsec_tx_sa *tx_sa; + int ret = 0; + int i; + + if (!netif_running(secy->netdev)) + return ret; + + ret = aq_set_txsc(nic, txsc_idx); + if (ret) + return ret; + + for (i = 0; i < MACSEC_NUM_AN; i++) { + tx_sa = rcu_dereference_bh(secy->tx_sc.sa[i]); + if (tx_sa) { + ret = aq_update_txsa(nic, aq_txsc->hw_sc_idx, secy, + tx_sa, aq_txsc->tx_sa_key[i], i); + if (ret) + return ret; + } + } + + return ret; +} + +static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, + enum aq_clear_type clear_type) +{ + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); + if (txsc_idx >= 0) { + ret = aq_clear_txsc(nic, txsc_idx, clear_type); + if (ret) + return ret; + } + + return ret; +} + +static int aq_apply_secy_cfg(struct aq_nic_s *nic, + const struct macsec_secy *secy) +{ + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); + if (txsc_idx >= 0) + apply_txsc_cfg(nic, txsc_idx); + + return ret; +} + +static int aq_apply_macsec_cfg(struct aq_nic_s *nic) +{ + int ret = 0; + int i; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (nic->macsec_cfg->txsc_idx_busy & BIT(i)) { + ret = apply_txsc_cfg(nic, i); + if (ret) + return ret; + } + } + + return ret; +} + +static int aq_sa_from_sa_idx(const enum aq_macsec_sc_sa sc_sa, const int sa_idx) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sa_idx & 3; + case aq_macsec_sa_sc_2sa_16sc: + return sa_idx & 1; + case aq_macsec_sa_sc_1sa_32sc: + return 0; + default: + WARN_ONCE(1, "Invalid sc_sa"); + } + return -EINVAL; +} + +static int aq_sc_idx_from_sa_idx(const enum aq_macsec_sc_sa sc_sa, + const int sa_idx) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sa_idx & ~3; + case aq_macsec_sa_sc_2sa_16sc: + return sa_idx & ~1; + case aq_macsec_sa_sc_1sa_32sc: + return sa_idx; + default: + WARN_ONCE(1, "Invalid sc_sa"); + } + return -EINVAL; +} + static void aq_check_txsa_expiration(struct aq_nic_s *nic) { + u32 egress_sa_expired, egress_sa_threshold_expired; + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + struct aq_hw_s *hw = nic->aq_hw; + struct aq_macsec_txsc *aq_txsc; + const struct macsec_secy *secy; + int sc_idx = 0, txsc_idx = 0; + enum aq_macsec_sc_sa sc_sa; + struct macsec_tx_sa *tx_sa; + 
unsigned char an = 0;
+	int ret;
+	int i;
+
+	sc_sa = cfg->sc_sa;
+
+	ret = aq_mss_get_egress_sa_expired(hw, &egress_sa_expired);
+	if (unlikely(ret))
+		return;
+
+	ret = aq_mss_get_egress_sa_threshold_expired(hw,
+		&egress_sa_threshold_expired);
+
+	for (i = 0; i < AQ_MACSEC_MAX_SA; i++) {
+		if (egress_sa_expired & BIT(i)) {
+			an = aq_sa_from_sa_idx(sc_sa, i);
+			sc_idx = aq_sc_idx_from_sa_idx(sc_sa, i);
+			txsc_idx = aq_get_txsc_idx_from_sc_idx(sc_sa, sc_idx);
+			if (txsc_idx < 0)
+				continue;
+
+			aq_txsc = &cfg->aq_txsc[txsc_idx];
+			if (!(cfg->txsc_idx_busy & BIT(txsc_idx))) {
+				netdev_warn(nic->ndev,
+					"PN threshold expired on invalid TX SC");
+				continue;
+			}
+
+			secy = aq_txsc->sw_secy;
+			if (!netif_running(secy->netdev)) {
+				netdev_warn(nic->ndev,
+					"PN threshold expired on down TX SC");
+				continue;
+			}
+
+			if (unlikely(!(aq_txsc->tx_sa_idx_busy & BIT(an)))) {
+				netdev_warn(nic->ndev,
+					"PN threshold expired on invalid TX SA");
+				continue;
+			}
+
+			tx_sa = rcu_dereference_bh(secy->tx_sc.sa[an]);
+			macsec_pn_wrapped((struct macsec_secy *)secy, tx_sa);
+		}
+	}
+
+	aq_mss_set_egress_sa_expired(hw, egress_sa_expired);
+	if (likely(!ret))
+		aq_mss_set_egress_sa_threshold_expired(hw,
+			egress_sa_threshold_expired);
 }
 
 const struct macsec_ops aq_macsec_ops = {
@@ -129,10 +771,13 @@ void aq_macsec_free(struct aq_nic_s *nic)
 
 int aq_macsec_enable(struct aq_nic_s *nic)
 {
+	u32 ctl_ether_types[1] = { ETH_P_PAE };
 	struct macsec_msg_fw_response resp = { 0 };
 	struct macsec_msg_fw_request msg = { 0 };
 	struct aq_hw_s *hw = nic->aq_hw;
-	int ret = 0;
+	int num_ctl_ether_types = 0;
+	int index = 0, tbl_idx;
+	int ret;
 
 	if (!nic->macsec_cfg)
 		return 0;
@@ -155,6 +800,26 @@ int aq_macsec_enable(struct aq_nic_s *nic)
 		goto unlock;
 	}
 
+	/* Init Ethertype bypass filters */
+	for (index = 0; index < ARRAY_SIZE(ctl_ether_types); index++) {
+		struct aq_mss_egress_ctlf_record tx_ctlf_rec;
+
+		if (ctl_ether_types[index] == 0)
+			continue;
+
+		memset(&tx_ctlf_rec, 0, sizeof(tx_ctlf_rec));
+		tx_ctlf_rec.eth_type = ctl_ether_types[index];
+		tx_ctlf_rec.match_type = 4; /* Match eth_type only */
+		tx_ctlf_rec.match_mask = 0xf; /* match for eth_type */
+		tx_ctlf_rec.action = 0; /* Bypass MACSEC modules */
+		tbl_idx = NUMROWS_EGRESSCTLFRECORD - num_ctl_ether_types - 1;
+		aq_mss_set_egress_ctlf_record(hw, &tx_ctlf_rec, tbl_idx);
+
+		num_ctl_ether_types++;
+	}
+
+	ret = aq_apply_macsec_cfg(nic);
+
 unlock:
 	rtnl_unlock();
 	return ret;

diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
index e4c4cf3bea38..5ab0ee4bea73 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
@@ -24,6 +24,10 @@ enum aq_macsec_sc_sa {
 };
 
 struct aq_macsec_txsc {
+	u32 hw_sc_idx;
+	unsigned long tx_sa_idx_busy;
+	const struct macsec_secy *sw_secy;
+	u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
 };
 
 struct aq_macsec_rxsc {
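The SC/SA index packing used throughout the egress path is easiest to see
with concrete numbers. A worked example for the 4-SA/8-SC layout, following
aq_to_hw_sc_idx() and the sa_idx = hw_sc_idx | an convention in the patch
(the values are illustrative):

    /* Sketch: map a software TX SC slot and an association number to
     * the hardware table rows for the aq_macsec_sa_sc_4sa_8sc mode.
     */
    static u32 sa_idx_example(void)
    {
    	u32 txsc_idx = 3;		/* software TX SC slot */
    	u32 hw_sc_idx = txsc_idx << 2;	/* 4sa_8sc: 3 << 2 == 12 */

    	return hw_sc_idx | 2;		/* AN 2 lands in SA row 14 */
    }
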
From patchwork Mon Mar 23 13:13:44 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222043
X-Mailing-List: netdev@vger.kernel.org
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH net-next 13/17] net: atlantic: MACSec ingress offload HW bindings
Date: Mon, 23 Mar 2020 16:13:44 +0300
Message-ID: <20200323131348.340-14-irusskikh@marvell.com>
In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com>
References: <20200323131348.340-1-irusskikh@marvell.com>

From: Mark Starovoytov

This patch adds the Atlantic HW-specific bindings for MACSec ingress,
e.g. register addresses, structs, helper functions, etc., which will be
used by the actual callback implementations.
Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- .../atlantic/macsec/MSS_Ingress_registers.h | 77 + .../aquantia/atlantic/macsec/macsec_api.c | 1239 +++++++++++++++++ .../aquantia/atlantic/macsec/macsec_api.h | 148 ++ .../aquantia/atlantic/macsec/macsec_struct.h | 383 +++++ 4 files changed, 1847 insertions(+) create mode 100644 drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Ingress_registers.h diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Ingress_registers.h b/drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Ingress_registers.h new file mode 100644 index 000000000000..d4c00d9a0fc6 --- /dev/null +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Ingress_registers.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Atlantic Network Driver + * Copyright (C) 2020 Marvell International Ltd. + */ + +#ifndef MSS_INGRESS_REGS_HEADER +#define MSS_INGRESS_REGS_HEADER + +#define MSS_INGRESS_CTL_REGISTER_ADDR 0x0000800E +#define MSS_INGRESS_LUT_ADDR_CTL_REGISTER_ADDR 0x00008080 +#define MSS_INGRESS_LUT_CTL_REGISTER_ADDR 0x00008081 +#define MSS_INGRESS_LUT_DATA_CTL_REGISTER_ADDR 0x000080A0 + +struct mss_ingress_ctl_register { + union { + struct { + unsigned int soft_reset : 1; + unsigned int operation_point_to_point : 1; + unsigned int create_sci : 1; + /* Unused */ + unsigned int mask_short_length_error : 1; + unsigned int drop_kay_packet : 1; + unsigned int drop_igprc_miss : 1; + /* Unused */ + unsigned int check_icv : 1; + unsigned int clear_global_time : 1; + unsigned int clear_count : 1; + unsigned int high_prio : 1; + unsigned int remove_sectag : 1; + unsigned int global_validate_frames : 2; + unsigned int icv_lsb_8bytes_enabled : 1; + unsigned int reserved0 : 2; + } bits_0; + unsigned short word_0; + }; + union { + struct { + unsigned int reserved0 : 16; + } bits_1; + unsigned short word_1; + }; +}; + +struct mss_ingress_lut_addr_ctl_register { + union { + struct { + unsigned int lut_addr : 9; + unsigned int reserved0 : 3; + /* 0x0 : Ingress Pre-Security MAC Control FIlter + * (IGPRCTLF) LUT + * 0x1 : Ingress Pre-Security Classification LUT (IGPRC) + * 0x2 : Ingress Packet Format (IGPFMT) SAKey LUT + * 0x3 : Ingress Packet Format (IGPFMT) SC/SA LUT + * 0x4 : Ingress Post-Security Classification LUT + * (IGPOC) + * 0x5 : Ingress Post-Security MAC Control Filter + * (IGPOCTLF) LUT + * 0x6 : Ingress MIB (IGMIB) + */ + unsigned int lut_select : 4; + } bits_0; + unsigned short word_0; + }; +}; + +struct mss_ingress_lut_ctl_register { + union { + struct { + unsigned int reserved0 : 14; + unsigned int lut_read : 1; + unsigned int lut_write : 1; + } bits_0; + unsigned short word_0; + }; +}; + +#endif /* MSS_INGRESS_REGS_HEADER */ diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c index 8448df694770..f2316d965715 100644 --- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c @@ -5,6 +5,7 @@ #include "macsec_api.h" #include +#include "MSS_Ingress_registers.h" #include "MSS_Egress_registers.h" #include "aq_phy.h" @@ -55,6 +56,115 @@ static int aq_mss_mdio_write(struct aq_hw_s *hw, u16 mmd, u16 addr, u16 data) * MACSEC config and status ******************************************************************************/ +static int set_raw_ingress_record(struct aq_hw_s *hw, u16 *packed_record, + u8 num_words, u8 table_id, + u16 table_index) +{ + struct mss_ingress_lut_addr_ctl_register lut_sel_reg; + 
+	struct mss_ingress_lut_ctl_register lut_op_reg;
+
+	unsigned int i;
+
+	/* NOTE: MSS registers must always be read/written as adjacent pairs.
+	 * For instance, to write either or both 1E.80A0 and 80A1, we have to:
+	 * 1. Write 1E.80A0 first
+	 * 2. Then write 1E.80A1
+	 *
+	 * For HHD devices: These writes need to be performed consecutively, and
+	 * to ensure this we use the PIF mailbox to delegate the reads/writes to
+	 * the FW.
+	 *
+	 * For EUR devices: No need to use the PIF mailbox; it is safe to
+	 * write to the registers directly.
+	 */
+
+	/* Write the packed record words to the data buffer registers. */
+	for (i = 0; i < num_words; i += 2) {
+		aq_mss_mdio_write(hw, MDIO_MMD_VEND1,
+				  MSS_INGRESS_LUT_DATA_CTL_REGISTER_ADDR + i,
+				  packed_record[i]);
+		aq_mss_mdio_write(hw, MDIO_MMD_VEND1,
+				  MSS_INGRESS_LUT_DATA_CTL_REGISTER_ADDR + i +
+					  1,
+				  packed_record[i + 1]);
+	}
+
+	/* Clear out the unused data buffer registers. */
+	for (i = num_words; i < 24; i += 2) {
+		aq_mss_mdio_write(hw, MDIO_MMD_VEND1,
+				  MSS_INGRESS_LUT_DATA_CTL_REGISTER_ADDR + i,
+				  0);
+		aq_mss_mdio_write(hw, MDIO_MMD_VEND1,
+				  MSS_INGRESS_LUT_DATA_CTL_REGISTER_ADDR + i + 1, 0);
+	}
+
+	/* Select the table and row index to write to */
+	lut_sel_reg.bits_0.lut_select = table_id;
+	lut_sel_reg.bits_0.lut_addr = table_index;
+
+	lut_op_reg.bits_0.lut_read = 0;
+	lut_op_reg.bits_0.lut_write = 1;
+
+	aq_mss_mdio_write(hw, MDIO_MMD_VEND1,
+			  MSS_INGRESS_LUT_ADDR_CTL_REGISTER_ADDR,
+			  lut_sel_reg.word_0);
+	aq_mss_mdio_write(hw, MDIO_MMD_VEND1, MSS_INGRESS_LUT_CTL_REGISTER_ADDR,
+			  lut_op_reg.word_0);
+
+	return 0;
+}
+
+/*! Read the specified Ingress LUT table row.
+ * packed_record - [OUT] The table row data (raw).
+ */
+static int get_raw_ingress_record(struct aq_hw_s *hw, u16 *packed_record,
+				  u8 num_words, u8 table_id,
+				  u16 table_index)
+{
+	struct mss_ingress_lut_addr_ctl_register lut_sel_reg;
+	struct mss_ingress_lut_ctl_register lut_op_reg;
+	int ret;
+
+	unsigned int i;
+
+	/* Select the table and row index to read */
+	lut_sel_reg.bits_0.lut_select = table_id;
+	lut_sel_reg.bits_0.lut_addr = table_index;
+
+	lut_op_reg.bits_0.lut_read = 1;
+	lut_op_reg.bits_0.lut_write = 0;
+
+	ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1,
+				MSS_INGRESS_LUT_ADDR_CTL_REGISTER_ADDR,
+				lut_sel_reg.word_0);
+	if (unlikely(ret))
+		return ret;
+	ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1,
+				MSS_INGRESS_LUT_CTL_REGISTER_ADDR,
+				lut_op_reg.word_0);
+	if (unlikely(ret))
+		return ret;
+
+	memset(packed_record, 0, sizeof(u16) * num_words);
+
+	for (i = 0; i < num_words; i += 2) {
+		ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1,
+				       MSS_INGRESS_LUT_DATA_CTL_REGISTER_ADDR +
+					       i,
+				       &packed_record[i]);
+		if (unlikely(ret))
+			return ret;
+		ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1,
+				       MSS_INGRESS_LUT_DATA_CTL_REGISTER_ADDR +
+					       i + 1,
+				       &packed_record[i + 1]);
+		if (unlikely(ret))
+			return ret;
+	}
+
+	return 0;
+}
+
 /*! Write packed_record to the specified Egress LUT table row.
*/ static int set_raw_egress_record(struct aq_hw_s *hw, u16 *packed_record, u8 num_words, u8 table_id, @@ -148,6 +258,1135 @@ static int get_raw_egress_record(struct aq_hw_s *hw, u16 *packed_record, return 0; } +static int +set_ingress_prectlf_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_prectlf_record *rec, + u16 table_index) +{ + u16 packed_record[6]; + + if (table_index >= NUMROWS_INGRESSPRECTLFRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 6); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->sa_da[0] >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->sa_da[0] >> 16) & 0xFFFF) << 0); + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->sa_da[1] >> 0) & 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->eth_type >> 0) & 0xFFFF) << 0); + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->match_mask >> 0) & 0xFFFF) << 0); + packed_record[5] = (packed_record[5] & 0xFFF0) | + (((rec->match_type >> 0) & 0xF) << 0); + packed_record[5] = + (packed_record[5] & 0xFFEF) | (((rec->action >> 0) & 0x1) << 4); + + return set_raw_ingress_record(hw, packed_record, 6, 0, + ROWOFFSET_INGRESSPRECTLFRECORD + + table_index); +} + +int aq_mss_set_ingress_prectlf_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_prectlf_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_ingress_prectlf_record, hw, rec, + table_index); +} + +static int get_ingress_prectlf_record(struct aq_hw_s *hw, + struct aq_mss_ingress_prectlf_record *rec, + u16 table_index) +{ + u16 packed_record[6]; + int ret; + + if (table_index >= NUMROWS_INGRESSPRECTLFRECORD) + return -EINVAL; + + /* If the row that we want to read is odd, first read the previous even + * row, throw that value away, and finally read the desired row. + * This is a workaround for EUR devices that allows us to read + * odd-numbered rows. For HHD devices: this workaround will not work, + * so don't bother; odd-numbered rows are not readable. 
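+	 * Callers on HHD hence cannot expect meaningful data back when
+	 * reading odd-numbered rows of this table.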
+ */ + if ((table_index % 2) > 0) { + ret = get_raw_ingress_record(hw, packed_record, 6, 0, + ROWOFFSET_INGRESSPRECTLFRECORD + + table_index - 1); + if (unlikely(ret)) + return ret; + } + + ret = get_raw_ingress_record(hw, packed_record, 6, 0, + ROWOFFSET_INGRESSPRECTLFRECORD + + table_index); + if (unlikely(ret)) + return ret; + + rec->sa_da[0] = (rec->sa_da[0] & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->sa_da[0] = (rec->sa_da[0] & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->sa_da[1] = (rec->sa_da[1] & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + + rec->eth_type = (rec->eth_type & 0xFFFF0000) | + (((packed_record[3] >> 0) & 0xFFFF) << 0); + + rec->match_mask = (rec->match_mask & 0xFFFF0000) | + (((packed_record[4] >> 0) & 0xFFFF) << 0); + + rec->match_type = (rec->match_type & 0xFFFFFFF0) | + (((packed_record[5] >> 0) & 0xF) << 0); + + rec->action = (rec->action & 0xFFFFFFFE) | + (((packed_record[5] >> 4) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_ingress_prectlf_record(struct aq_hw_s *hw, + struct aq_mss_ingress_prectlf_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_ingress_prectlf_record, hw, rec, + table_index); +} + +static int +set_ingress_preclass_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_preclass_record *rec, + u16 table_index) +{ + u16 packed_record[20]; + + if (table_index >= NUMROWS_INGRESSPRECLASSRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 20); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->sci[0] >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->sci[0] >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->sci[1] >> 0) & 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->sci[1] >> 16) & 0xFFFF) << 0); + + packed_record[4] = + (packed_record[4] & 0xFF00) | (((rec->tci >> 0) & 0xFF) << 0); + + packed_record[4] = (packed_record[4] & 0x00FF) | + (((rec->encr_offset >> 0) & 0xFF) << 8); + + packed_record[5] = (packed_record[5] & 0x0000) | + (((rec->eth_type >> 0) & 0xFFFF) << 0); + + packed_record[6] = (packed_record[6] & 0x0000) | + (((rec->snap[0] >> 0) & 0xFFFF) << 0); + packed_record[7] = (packed_record[7] & 0x0000) | + (((rec->snap[0] >> 16) & 0xFFFF) << 0); + + packed_record[8] = (packed_record[8] & 0xFF00) | + (((rec->snap[1] >> 0) & 0xFF) << 0); + + packed_record[8] = + (packed_record[8] & 0x00FF) | (((rec->llc >> 0) & 0xFF) << 8); + packed_record[9] = + (packed_record[9] & 0x0000) | (((rec->llc >> 8) & 0xFFFF) << 0); + + packed_record[10] = (packed_record[10] & 0x0000) | + (((rec->mac_sa[0] >> 0) & 0xFFFF) << 0); + packed_record[11] = (packed_record[11] & 0x0000) | + (((rec->mac_sa[0] >> 16) & 0xFFFF) << 0); + + packed_record[12] = (packed_record[12] & 0x0000) | + (((rec->mac_sa[1] >> 0) & 0xFFFF) << 0); + + packed_record[13] = (packed_record[13] & 0x0000) | + (((rec->mac_da[0] >> 0) & 0xFFFF) << 0); + packed_record[14] = (packed_record[14] & 0x0000) | + (((rec->mac_da[0] >> 16) & 0xFFFF) << 0); + + packed_record[15] = (packed_record[15] & 0x0000) | + (((rec->mac_da[1] >> 0) & 0xFFFF) << 0); + + packed_record[16] = (packed_record[16] & 0xFFFE) | + (((rec->lpbk_packet >> 0) & 0x1) << 0); + + packed_record[16] = (packed_record[16] & 0xFFF9) | + (((rec->an_mask >> 0) & 0x3) << 1); + + packed_record[16] = (packed_record[16] & 0xFE07) | + (((rec->tci_mask >> 0) & 0x3F) << 3); + + packed_record[16] = 
(packed_record[16] & 0x01FF) | + (((rec->sci_mask >> 0) & 0x7F) << 9); + packed_record[17] = (packed_record[17] & 0xFFFE) | + (((rec->sci_mask >> 7) & 0x1) << 0); + + packed_record[17] = (packed_record[17] & 0xFFF9) | + (((rec->eth_type_mask >> 0) & 0x3) << 1); + + packed_record[17] = (packed_record[17] & 0xFF07) | + (((rec->snap_mask >> 0) & 0x1F) << 3); + + packed_record[17] = (packed_record[17] & 0xF8FF) | + (((rec->llc_mask >> 0) & 0x7) << 8); + + packed_record[17] = (packed_record[17] & 0xF7FF) | + (((rec->_802_2_encapsulate >> 0) & 0x1) << 11); + + packed_record[17] = (packed_record[17] & 0x0FFF) | + (((rec->sa_mask >> 0) & 0xF) << 12); + packed_record[18] = (packed_record[18] & 0xFFFC) | + (((rec->sa_mask >> 4) & 0x3) << 0); + + packed_record[18] = (packed_record[18] & 0xFF03) | + (((rec->da_mask >> 0) & 0x3F) << 2); + + packed_record[18] = (packed_record[18] & 0xFEFF) | + (((rec->lpbk_mask >> 0) & 0x1) << 8); + + packed_record[18] = (packed_record[18] & 0xC1FF) | + (((rec->sc_idx >> 0) & 0x1F) << 9); + + packed_record[18] = (packed_record[18] & 0xBFFF) | + (((rec->proc_dest >> 0) & 0x1) << 14); + + packed_record[18] = (packed_record[18] & 0x7FFF) | + (((rec->action >> 0) & 0x1) << 15); + packed_record[19] = (packed_record[19] & 0xFFFE) | + (((rec->action >> 1) & 0x1) << 0); + + packed_record[19] = (packed_record[19] & 0xFFFD) | + (((rec->ctrl_unctrl >> 0) & 0x1) << 1); + + packed_record[19] = (packed_record[19] & 0xFFFB) | + (((rec->sci_from_table >> 0) & 0x1) << 2); + + packed_record[19] = (packed_record[19] & 0xFF87) | + (((rec->reserved >> 0) & 0xF) << 3); + + packed_record[19] = + (packed_record[19] & 0xFF7F) | (((rec->valid >> 0) & 0x1) << 7); + + return set_raw_ingress_record(hw, packed_record, 20, 1, + ROWOFFSET_INGRESSPRECLASSRECORD + + table_index); +} + +int aq_mss_set_ingress_preclass_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_preclass_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_ingress_preclass_record, hw, rec, + table_index); +} + +static int +get_ingress_preclass_record(struct aq_hw_s *hw, + struct aq_mss_ingress_preclass_record *rec, + u16 table_index) +{ + u16 packed_record[20]; + int ret; + + if (table_index >= NUMROWS_INGRESSPRECLASSRECORD) + return -EINVAL; + + /* If the row that we want to read is odd, first read the previous even + * row, throw that value away, and finally read the desired row. 
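+	 * (Same even-row dummy-read workaround as in
+	 * get_ingress_prectlf_record(); on HHD, odd-numbered rows are not
+	 * readable.)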
+ */ + if ((table_index % 2) > 0) { + ret = get_raw_ingress_record(hw, packed_record, 20, 1, + ROWOFFSET_INGRESSPRECLASSRECORD + + table_index - 1); + if (unlikely(ret)) + return ret; + } + + ret = get_raw_ingress_record(hw, packed_record, 20, 1, + ROWOFFSET_INGRESSPRECLASSRECORD + + table_index); + if (unlikely(ret)) + return ret; + + rec->sci[0] = (rec->sci[0] & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->sci[0] = (rec->sci[0] & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->sci[1] = (rec->sci[1] & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + rec->sci[1] = (rec->sci[1] & 0x0000FFFF) | + (((packed_record[3] >> 0) & 0xFFFF) << 16); + + rec->tci = (rec->tci & 0xFFFFFF00) | + (((packed_record[4] >> 0) & 0xFF) << 0); + + rec->encr_offset = (rec->encr_offset & 0xFFFFFF00) | + (((packed_record[4] >> 8) & 0xFF) << 0); + + rec->eth_type = (rec->eth_type & 0xFFFF0000) | + (((packed_record[5] >> 0) & 0xFFFF) << 0); + + rec->snap[0] = (rec->snap[0] & 0xFFFF0000) | + (((packed_record[6] >> 0) & 0xFFFF) << 0); + rec->snap[0] = (rec->snap[0] & 0x0000FFFF) | + (((packed_record[7] >> 0) & 0xFFFF) << 16); + + rec->snap[1] = (rec->snap[1] & 0xFFFFFF00) | + (((packed_record[8] >> 0) & 0xFF) << 0); + + rec->llc = (rec->llc & 0xFFFFFF00) | + (((packed_record[8] >> 8) & 0xFF) << 0); + rec->llc = (rec->llc & 0xFF0000FF) | + (((packed_record[9] >> 0) & 0xFFFF) << 8); + + rec->mac_sa[0] = (rec->mac_sa[0] & 0xFFFF0000) | + (((packed_record[10] >> 0) & 0xFFFF) << 0); + rec->mac_sa[0] = (rec->mac_sa[0] & 0x0000FFFF) | + (((packed_record[11] >> 0) & 0xFFFF) << 16); + + rec->mac_sa[1] = (rec->mac_sa[1] & 0xFFFF0000) | + (((packed_record[12] >> 0) & 0xFFFF) << 0); + + rec->mac_da[0] = (rec->mac_da[0] & 0xFFFF0000) | + (((packed_record[13] >> 0) & 0xFFFF) << 0); + rec->mac_da[0] = (rec->mac_da[0] & 0x0000FFFF) | + (((packed_record[14] >> 0) & 0xFFFF) << 16); + + rec->mac_da[1] = (rec->mac_da[1] & 0xFFFF0000) | + (((packed_record[15] >> 0) & 0xFFFF) << 0); + + rec->lpbk_packet = (rec->lpbk_packet & 0xFFFFFFFE) | + (((packed_record[16] >> 0) & 0x1) << 0); + + rec->an_mask = (rec->an_mask & 0xFFFFFFFC) | + (((packed_record[16] >> 1) & 0x3) << 0); + + rec->tci_mask = (rec->tci_mask & 0xFFFFFFC0) | + (((packed_record[16] >> 3) & 0x3F) << 0); + + rec->sci_mask = (rec->sci_mask & 0xFFFFFF80) | + (((packed_record[16] >> 9) & 0x7F) << 0); + rec->sci_mask = (rec->sci_mask & 0xFFFFFF7F) | + (((packed_record[17] >> 0) & 0x1) << 7); + + rec->eth_type_mask = (rec->eth_type_mask & 0xFFFFFFFC) | + (((packed_record[17] >> 1) & 0x3) << 0); + + rec->snap_mask = (rec->snap_mask & 0xFFFFFFE0) | + (((packed_record[17] >> 3) & 0x1F) << 0); + + rec->llc_mask = (rec->llc_mask & 0xFFFFFFF8) | + (((packed_record[17] >> 8) & 0x7) << 0); + + rec->_802_2_encapsulate = (rec->_802_2_encapsulate & 0xFFFFFFFE) | + (((packed_record[17] >> 11) & 0x1) << 0); + + rec->sa_mask = (rec->sa_mask & 0xFFFFFFF0) | + (((packed_record[17] >> 12) & 0xF) << 0); + rec->sa_mask = (rec->sa_mask & 0xFFFFFFCF) | + (((packed_record[18] >> 0) & 0x3) << 4); + + rec->da_mask = (rec->da_mask & 0xFFFFFFC0) | + (((packed_record[18] >> 2) & 0x3F) << 0); + + rec->lpbk_mask = (rec->lpbk_mask & 0xFFFFFFFE) | + (((packed_record[18] >> 8) & 0x1) << 0); + + rec->sc_idx = (rec->sc_idx & 0xFFFFFFE0) | + (((packed_record[18] >> 9) & 0x1F) << 0); + + rec->proc_dest = (rec->proc_dest & 0xFFFFFFFE) | + (((packed_record[18] >> 14) & 0x1) << 0); + + rec->action = (rec->action & 0xFFFFFFFE) | + (((packed_record[18] >> 15) & 0x1) << 0); 
+ rec->action = (rec->action & 0xFFFFFFFD) | + (((packed_record[19] >> 0) & 0x1) << 1); + + rec->ctrl_unctrl = (rec->ctrl_unctrl & 0xFFFFFFFE) | + (((packed_record[19] >> 1) & 0x1) << 0); + + rec->sci_from_table = (rec->sci_from_table & 0xFFFFFFFE) | + (((packed_record[19] >> 2) & 0x1) << 0); + + rec->reserved = (rec->reserved & 0xFFFFFFF0) | + (((packed_record[19] >> 3) & 0xF) << 0); + + rec->valid = (rec->valid & 0xFFFFFFFE) | + (((packed_record[19] >> 7) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_ingress_preclass_record(struct aq_hw_s *hw, + struct aq_mss_ingress_preclass_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_ingress_preclass_record, hw, rec, + table_index); +} + +static int set_ingress_sc_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_sc_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + + if (table_index >= NUMROWS_INGRESSSCRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 8); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->stop_time >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->stop_time >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->start_time >> 0) & 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->start_time >> 16) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0xFFFC) | + (((rec->validate_frames >> 0) & 0x3) << 0); + + packed_record[4] = (packed_record[4] & 0xFFFB) | + (((rec->replay_protect >> 0) & 0x1) << 2); + + packed_record[4] = (packed_record[4] & 0x0007) | + (((rec->anti_replay_window >> 0) & 0x1FFF) << 3); + packed_record[5] = (packed_record[5] & 0x0000) | + (((rec->anti_replay_window >> 13) & 0xFFFF) << 0); + packed_record[6] = (packed_record[6] & 0xFFF8) | + (((rec->anti_replay_window >> 29) & 0x7) << 0); + + packed_record[6] = (packed_record[6] & 0xFFF7) | + (((rec->receiving >> 0) & 0x1) << 3); + + packed_record[6] = + (packed_record[6] & 0xFFEF) | (((rec->fresh >> 0) & 0x1) << 4); + + packed_record[6] = + (packed_record[6] & 0xFFDF) | (((rec->an_rol >> 0) & 0x1) << 5); + + packed_record[6] = (packed_record[6] & 0x003F) | + (((rec->reserved >> 0) & 0x3FF) << 6); + packed_record[7] = (packed_record[7] & 0x8000) | + (((rec->reserved >> 10) & 0x7FFF) << 0); + + packed_record[7] = + (packed_record[7] & 0x7FFF) | (((rec->valid >> 0) & 0x1) << 15); + + return set_raw_ingress_record(hw, packed_record, 8, 3, + ROWOFFSET_INGRESSSCRECORD + table_index); +} + +int aq_mss_set_ingress_sc_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_sc_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_ingress_sc_record, hw, rec, table_index); +} + +static int get_ingress_sc_record(struct aq_hw_s *hw, + struct aq_mss_ingress_sc_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + int ret; + + if (table_index >= NUMROWS_INGRESSSCRECORD) + return -EINVAL; + + ret = get_raw_ingress_record(hw, packed_record, 8, 3, + ROWOFFSET_INGRESSSCRECORD + table_index); + if (unlikely(ret)) + return ret; + + rec->stop_time = (rec->stop_time & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->stop_time = (rec->stop_time & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->start_time = (rec->start_time & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + rec->start_time = (rec->start_time & 0x0000FFFF) | + (((packed_record[3] >> 0) & 0xFFFF) << 16); + + rec->validate_frames = (rec->validate_frames & 
0xFFFFFFFC) | + (((packed_record[4] >> 0) & 0x3) << 0); + + rec->replay_protect = (rec->replay_protect & 0xFFFFFFFE) | + (((packed_record[4] >> 2) & 0x1) << 0); + + rec->anti_replay_window = (rec->anti_replay_window & 0xFFFFE000) | + (((packed_record[4] >> 3) & 0x1FFF) << 0); + rec->anti_replay_window = (rec->anti_replay_window & 0xE0001FFF) | + (((packed_record[5] >> 0) & 0xFFFF) << 13); + rec->anti_replay_window = (rec->anti_replay_window & 0x1FFFFFFF) | + (((packed_record[6] >> 0) & 0x7) << 29); + + rec->receiving = (rec->receiving & 0xFFFFFFFE) | + (((packed_record[6] >> 3) & 0x1) << 0); + + rec->fresh = (rec->fresh & 0xFFFFFFFE) | + (((packed_record[6] >> 4) & 0x1) << 0); + + rec->an_rol = (rec->an_rol & 0xFFFFFFFE) | + (((packed_record[6] >> 5) & 0x1) << 0); + + rec->reserved = (rec->reserved & 0xFFFFFC00) | + (((packed_record[6] >> 6) & 0x3FF) << 0); + rec->reserved = (rec->reserved & 0xFE0003FF) | + (((packed_record[7] >> 0) & 0x7FFF) << 10); + + rec->valid = (rec->valid & 0xFFFFFFFE) | + (((packed_record[7] >> 15) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_ingress_sc_record(struct aq_hw_s *hw, + struct aq_mss_ingress_sc_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_ingress_sc_record, hw, rec, table_index); +} + +static int set_ingress_sa_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_sa_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + + if (table_index >= NUMROWS_INGRESSSARECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 8); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->stop_time >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->stop_time >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->start_time >> 0) & 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->start_time >> 16) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->next_pn >> 0) & 0xFFFF) << 0); + packed_record[5] = (packed_record[5] & 0x0000) | + (((rec->next_pn >> 16) & 0xFFFF) << 0); + + packed_record[6] = (packed_record[6] & 0xFFFE) | + (((rec->sat_nextpn >> 0) & 0x1) << 0); + + packed_record[6] = + (packed_record[6] & 0xFFFD) | (((rec->in_use >> 0) & 0x1) << 1); + + packed_record[6] = + (packed_record[6] & 0xFFFB) | (((rec->fresh >> 0) & 0x1) << 2); + + packed_record[6] = (packed_record[6] & 0x0007) | + (((rec->reserved >> 0) & 0x1FFF) << 3); + packed_record[7] = (packed_record[7] & 0x8000) | + (((rec->reserved >> 13) & 0x7FFF) << 0); + + packed_record[7] = + (packed_record[7] & 0x7FFF) | (((rec->valid >> 0) & 0x1) << 15); + + return set_raw_ingress_record(hw, packed_record, 8, 3, + ROWOFFSET_INGRESSSARECORD + table_index); +} + +int aq_mss_set_ingress_sa_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_sa_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_ingress_sa_record, hw, rec, table_index); +} + +static int get_ingress_sa_record(struct aq_hw_s *hw, + struct aq_mss_ingress_sa_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + int ret; + + if (table_index >= NUMROWS_INGRESSSARECORD) + return -EINVAL; + + ret = get_raw_ingress_record(hw, packed_record, 8, 3, + ROWOFFSET_INGRESSSARECORD + table_index); + if (unlikely(ret)) + return ret; + + rec->stop_time = (rec->stop_time & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->stop_time = (rec->stop_time & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + 
rec->start_time = (rec->start_time & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + rec->start_time = (rec->start_time & 0x0000FFFF) | + (((packed_record[3] >> 0) & 0xFFFF) << 16); + + rec->next_pn = (rec->next_pn & 0xFFFF0000) | + (((packed_record[4] >> 0) & 0xFFFF) << 0); + rec->next_pn = (rec->next_pn & 0x0000FFFF) | + (((packed_record[5] >> 0) & 0xFFFF) << 16); + + rec->sat_nextpn = (rec->sat_nextpn & 0xFFFFFFFE) | + (((packed_record[6] >> 0) & 0x1) << 0); + + rec->in_use = (rec->in_use & 0xFFFFFFFE) | + (((packed_record[6] >> 1) & 0x1) << 0); + + rec->fresh = (rec->fresh & 0xFFFFFFFE) | + (((packed_record[6] >> 2) & 0x1) << 0); + + rec->reserved = (rec->reserved & 0xFFFFE000) | + (((packed_record[6] >> 3) & 0x1FFF) << 0); + rec->reserved = (rec->reserved & 0xF0001FFF) | + (((packed_record[7] >> 0) & 0x7FFF) << 13); + + rec->valid = (rec->valid & 0xFFFFFFFE) | + (((packed_record[7] >> 15) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_ingress_sa_record(struct aq_hw_s *hw, + struct aq_mss_ingress_sa_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_ingress_sa_record, hw, rec, table_index); +} + +static int +set_ingress_sakey_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_sakey_record *rec, + u16 table_index) +{ + u16 packed_record[18]; + + if (table_index >= NUMROWS_INGRESSSAKEYRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 18); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->key[0] >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->key[0] >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->key[1] >> 0) & 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->key[1] >> 16) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->key[2] >> 0) & 0xFFFF) << 0); + packed_record[5] = (packed_record[5] & 0x0000) | + (((rec->key[2] >> 16) & 0xFFFF) << 0); + + packed_record[6] = (packed_record[6] & 0x0000) | + (((rec->key[3] >> 0) & 0xFFFF) << 0); + packed_record[7] = (packed_record[7] & 0x0000) | + (((rec->key[3] >> 16) & 0xFFFF) << 0); + + packed_record[8] = (packed_record[8] & 0x0000) | + (((rec->key[4] >> 0) & 0xFFFF) << 0); + packed_record[9] = (packed_record[9] & 0x0000) | + (((rec->key[4] >> 16) & 0xFFFF) << 0); + + packed_record[10] = (packed_record[10] & 0x0000) | + (((rec->key[5] >> 0) & 0xFFFF) << 0); + packed_record[11] = (packed_record[11] & 0x0000) | + (((rec->key[5] >> 16) & 0xFFFF) << 0); + + packed_record[12] = (packed_record[12] & 0x0000) | + (((rec->key[6] >> 0) & 0xFFFF) << 0); + packed_record[13] = (packed_record[13] & 0x0000) | + (((rec->key[6] >> 16) & 0xFFFF) << 0); + + packed_record[14] = (packed_record[14] & 0x0000) | + (((rec->key[7] >> 0) & 0xFFFF) << 0); + packed_record[15] = (packed_record[15] & 0x0000) | + (((rec->key[7] >> 16) & 0xFFFF) << 0); + + packed_record[16] = (packed_record[16] & 0xFFFC) | + (((rec->key_len >> 0) & 0x3) << 0); + + return set_raw_ingress_record(hw, packed_record, 18, 2, + ROWOFFSET_INGRESSSAKEYRECORD + + table_index); +} + +int aq_mss_set_ingress_sakey_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_sakey_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_ingress_sakey_record, hw, rec, table_index); +} + +static int get_ingress_sakey_record(struct aq_hw_s *hw, + struct aq_mss_ingress_sakey_record *rec, + u16 table_index) +{ + u16 packed_record[18]; + int ret; + + if (table_index >= 
NUMROWS_INGRESSSAKEYRECORD) + return -EINVAL; + + ret = get_raw_ingress_record(hw, packed_record, 18, 2, + ROWOFFSET_INGRESSSAKEYRECORD + + table_index); + if (unlikely(ret)) + return ret; + + rec->key[0] = (rec->key[0] & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->key[0] = (rec->key[0] & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->key[1] = (rec->key[1] & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + rec->key[1] = (rec->key[1] & 0x0000FFFF) | + (((packed_record[3] >> 0) & 0xFFFF) << 16); + + rec->key[2] = (rec->key[2] & 0xFFFF0000) | + (((packed_record[4] >> 0) & 0xFFFF) << 0); + rec->key[2] = (rec->key[2] & 0x0000FFFF) | + (((packed_record[5] >> 0) & 0xFFFF) << 16); + + rec->key[3] = (rec->key[3] & 0xFFFF0000) | + (((packed_record[6] >> 0) & 0xFFFF) << 0); + rec->key[3] = (rec->key[3] & 0x0000FFFF) | + (((packed_record[7] >> 0) & 0xFFFF) << 16); + + rec->key[4] = (rec->key[4] & 0xFFFF0000) | + (((packed_record[8] >> 0) & 0xFFFF) << 0); + rec->key[4] = (rec->key[4] & 0x0000FFFF) | + (((packed_record[9] >> 0) & 0xFFFF) << 16); + + rec->key[5] = (rec->key[5] & 0xFFFF0000) | + (((packed_record[10] >> 0) & 0xFFFF) << 0); + rec->key[5] = (rec->key[5] & 0x0000FFFF) | + (((packed_record[11] >> 0) & 0xFFFF) << 16); + + rec->key[6] = (rec->key[6] & 0xFFFF0000) | + (((packed_record[12] >> 0) & 0xFFFF) << 0); + rec->key[6] = (rec->key[6] & 0x0000FFFF) | + (((packed_record[13] >> 0) & 0xFFFF) << 16); + + rec->key[7] = (rec->key[7] & 0xFFFF0000) | + (((packed_record[14] >> 0) & 0xFFFF) << 0); + rec->key[7] = (rec->key[7] & 0x0000FFFF) | + (((packed_record[15] >> 0) & 0xFFFF) << 16); + + rec->key_len = (rec->key_len & 0xFFFFFFFC) | + (((packed_record[16] >> 0) & 0x3) << 0); + + return 0; +} + +int aq_mss_get_ingress_sakey_record(struct aq_hw_s *hw, + struct aq_mss_ingress_sakey_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_ingress_sakey_record, hw, rec, table_index); +} + +static int +set_ingress_postclass_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_postclass_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + + if (table_index >= NUMROWS_INGRESSPOSTCLASSRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 8); + + packed_record[0] = + (packed_record[0] & 0xFF00) | (((rec->byte0 >> 0) & 0xFF) << 0); + + packed_record[0] = + (packed_record[0] & 0x00FF) | (((rec->byte1 >> 0) & 0xFF) << 8); + + packed_record[1] = + (packed_record[1] & 0xFF00) | (((rec->byte2 >> 0) & 0xFF) << 0); + + packed_record[1] = + (packed_record[1] & 0x00FF) | (((rec->byte3 >> 0) & 0xFF) << 8); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->eth_type >> 0) & 0xFFFF) << 0); + + packed_record[3] = (packed_record[3] & 0xFFFE) | + (((rec->eth_type_valid >> 0) & 0x1) << 0); + + packed_record[3] = (packed_record[3] & 0xE001) | + (((rec->vlan_id >> 0) & 0xFFF) << 1); + + packed_record[3] = (packed_record[3] & 0x1FFF) | + (((rec->vlan_up >> 0) & 0x7) << 13); + + packed_record[4] = (packed_record[4] & 0xFFFE) | + (((rec->vlan_valid >> 0) & 0x1) << 0); + + packed_record[4] = + (packed_record[4] & 0xFFC1) | (((rec->sai >> 0) & 0x1F) << 1); + + packed_record[4] = (packed_record[4] & 0xFFBF) | + (((rec->sai_hit >> 0) & 0x1) << 6); + + packed_record[4] = (packed_record[4] & 0xF87F) | + (((rec->eth_type_mask >> 0) & 0xF) << 7); + + packed_record[4] = (packed_record[4] & 0x07FF) | + (((rec->byte3_location >> 0) & 0x1F) << 11); + packed_record[5] = (packed_record[5] & 0xFFFE) | + 
(((rec->byte3_location >> 5) & 0x1) << 0); + + packed_record[5] = (packed_record[5] & 0xFFF9) | + (((rec->byte3_mask >> 0) & 0x3) << 1); + + packed_record[5] = (packed_record[5] & 0xFE07) | + (((rec->byte2_location >> 0) & 0x3F) << 3); + + packed_record[5] = (packed_record[5] & 0xF9FF) | + (((rec->byte2_mask >> 0) & 0x3) << 9); + + packed_record[5] = (packed_record[5] & 0x07FF) | + (((rec->byte1_location >> 0) & 0x1F) << 11); + packed_record[6] = (packed_record[6] & 0xFFFE) | + (((rec->byte1_location >> 5) & 0x1) << 0); + + packed_record[6] = (packed_record[6] & 0xFFF9) | + (((rec->byte1_mask >> 0) & 0x3) << 1); + + packed_record[6] = (packed_record[6] & 0xFE07) | + (((rec->byte0_location >> 0) & 0x3F) << 3); + + packed_record[6] = (packed_record[6] & 0xF9FF) | + (((rec->byte0_mask >> 0) & 0x3) << 9); + + packed_record[6] = (packed_record[6] & 0xE7FF) | + (((rec->eth_type_valid_mask >> 0) & 0x3) << 11); + + packed_record[6] = (packed_record[6] & 0x1FFF) | + (((rec->vlan_id_mask >> 0) & 0x7) << 13); + packed_record[7] = (packed_record[7] & 0xFFFE) | + (((rec->vlan_id_mask >> 3) & 0x1) << 0); + + packed_record[7] = (packed_record[7] & 0xFFF9) | + (((rec->vlan_up_mask >> 0) & 0x3) << 1); + + packed_record[7] = (packed_record[7] & 0xFFE7) | + (((rec->vlan_valid_mask >> 0) & 0x3) << 3); + + packed_record[7] = (packed_record[7] & 0xFF9F) | + (((rec->sai_mask >> 0) & 0x3) << 5); + + packed_record[7] = (packed_record[7] & 0xFE7F) | + (((rec->sai_hit_mask >> 0) & 0x3) << 7); + + packed_record[7] = (packed_record[7] & 0xFDFF) | + (((rec->firstlevel_actions >> 0) & 0x1) << 9); + + packed_record[7] = (packed_record[7] & 0xFBFF) | + (((rec->secondlevel_actions >> 0) & 0x1) << 10); + + packed_record[7] = (packed_record[7] & 0x87FF) | + (((rec->reserved >> 0) & 0xF) << 11); + + packed_record[7] = + (packed_record[7] & 0x7FFF) | (((rec->valid >> 0) & 0x1) << 15); + + return set_raw_ingress_record(hw, packed_record, 8, 4, + ROWOFFSET_INGRESSPOSTCLASSRECORD + + table_index); +} + +int aq_mss_set_ingress_postclass_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_postclass_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_ingress_postclass_record, hw, rec, + table_index); +} + +static int +get_ingress_postclass_record(struct aq_hw_s *hw, + struct aq_mss_ingress_postclass_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + int ret; + + if (table_index >= NUMROWS_INGRESSPOSTCLASSRECORD) + return -EINVAL; + + /* If the row that we want to read is odd, first read the previous even + * row, throw that value away, and finally read the desired row. 
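+	 * (Same even-row dummy-read workaround as in
+	 * get_ingress_prectlf_record(); on HHD, odd-numbered rows are not
+	 * readable.)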
+ */ + if ((table_index % 2) > 0) { + ret = get_raw_ingress_record(hw, packed_record, 8, 4, + ROWOFFSET_INGRESSPOSTCLASSRECORD + + table_index - 1); + if (unlikely(ret)) + return ret; + } + + ret = get_raw_ingress_record(hw, packed_record, 8, 4, + ROWOFFSET_INGRESSPOSTCLASSRECORD + + table_index); + if (unlikely(ret)) + return ret; + + rec->byte0 = (rec->byte0 & 0xFFFFFF00) | + (((packed_record[0] >> 0) & 0xFF) << 0); + + rec->byte1 = (rec->byte1 & 0xFFFFFF00) | + (((packed_record[0] >> 8) & 0xFF) << 0); + + rec->byte2 = (rec->byte2 & 0xFFFFFF00) | + (((packed_record[1] >> 0) & 0xFF) << 0); + + rec->byte3 = (rec->byte3 & 0xFFFFFF00) | + (((packed_record[1] >> 8) & 0xFF) << 0); + + rec->eth_type = (rec->eth_type & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + + rec->eth_type_valid = (rec->eth_type_valid & 0xFFFFFFFE) | + (((packed_record[3] >> 0) & 0x1) << 0); + + rec->vlan_id = (rec->vlan_id & 0xFFFFF000) | + (((packed_record[3] >> 1) & 0xFFF) << 0); + + rec->vlan_up = (rec->vlan_up & 0xFFFFFFF8) | + (((packed_record[3] >> 13) & 0x7) << 0); + + rec->vlan_valid = (rec->vlan_valid & 0xFFFFFFFE) | + (((packed_record[4] >> 0) & 0x1) << 0); + + rec->sai = (rec->sai & 0xFFFFFFE0) | + (((packed_record[4] >> 1) & 0x1F) << 0); + + rec->sai_hit = (rec->sai_hit & 0xFFFFFFFE) | + (((packed_record[4] >> 6) & 0x1) << 0); + + rec->eth_type_mask = (rec->eth_type_mask & 0xFFFFFFF0) | + (((packed_record[4] >> 7) & 0xF) << 0); + + rec->byte3_location = (rec->byte3_location & 0xFFFFFFE0) | + (((packed_record[4] >> 11) & 0x1F) << 0); + rec->byte3_location = (rec->byte3_location & 0xFFFFFFDF) | + (((packed_record[5] >> 0) & 0x1) << 5); + + rec->byte3_mask = (rec->byte3_mask & 0xFFFFFFFC) | + (((packed_record[5] >> 1) & 0x3) << 0); + + rec->byte2_location = (rec->byte2_location & 0xFFFFFFC0) | + (((packed_record[5] >> 3) & 0x3F) << 0); + + rec->byte2_mask = (rec->byte2_mask & 0xFFFFFFFC) | + (((packed_record[5] >> 9) & 0x3) << 0); + + rec->byte1_location = (rec->byte1_location & 0xFFFFFFE0) | + (((packed_record[5] >> 11) & 0x1F) << 0); + rec->byte1_location = (rec->byte1_location & 0xFFFFFFDF) | + (((packed_record[6] >> 0) & 0x1) << 5); + + rec->byte1_mask = (rec->byte1_mask & 0xFFFFFFFC) | + (((packed_record[6] >> 1) & 0x3) << 0); + + rec->byte0_location = (rec->byte0_location & 0xFFFFFFC0) | + (((packed_record[6] >> 3) & 0x3F) << 0); + + rec->byte0_mask = (rec->byte0_mask & 0xFFFFFFFC) | + (((packed_record[6] >> 9) & 0x3) << 0); + + rec->eth_type_valid_mask = (rec->eth_type_valid_mask & 0xFFFFFFFC) | + (((packed_record[6] >> 11) & 0x3) << 0); + + rec->vlan_id_mask = (rec->vlan_id_mask & 0xFFFFFFF8) | + (((packed_record[6] >> 13) & 0x7) << 0); + rec->vlan_id_mask = (rec->vlan_id_mask & 0xFFFFFFF7) | + (((packed_record[7] >> 0) & 0x1) << 3); + + rec->vlan_up_mask = (rec->vlan_up_mask & 0xFFFFFFFC) | + (((packed_record[7] >> 1) & 0x3) << 0); + + rec->vlan_valid_mask = (rec->vlan_valid_mask & 0xFFFFFFFC) | + (((packed_record[7] >> 3) & 0x3) << 0); + + rec->sai_mask = (rec->sai_mask & 0xFFFFFFFC) | + (((packed_record[7] >> 5) & 0x3) << 0); + + rec->sai_hit_mask = (rec->sai_hit_mask & 0xFFFFFFFC) | + (((packed_record[7] >> 7) & 0x3) << 0); + + rec->firstlevel_actions = (rec->firstlevel_actions & 0xFFFFFFFE) | + (((packed_record[7] >> 9) & 0x1) << 0); + + rec->secondlevel_actions = (rec->secondlevel_actions & 0xFFFFFFFE) | + (((packed_record[7] >> 10) & 0x1) << 0); + + rec->reserved = (rec->reserved & 0xFFFFFFF0) | + (((packed_record[7] >> 11) & 0xF) << 0); + + rec->valid = (rec->valid & 0xFFFFFFFE) | 
+ (((packed_record[7] >> 15) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_ingress_postclass_record(struct aq_hw_s *hw, + struct aq_mss_ingress_postclass_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_ingress_postclass_record, hw, rec, + table_index); +} + +static int +set_ingress_postctlf_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_postctlf_record *rec, + u16 table_index) +{ + u16 packed_record[6]; + + if (table_index >= NUMROWS_INGRESSPOSTCTLFRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 6); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->sa_da[0] >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->sa_da[0] >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->sa_da[1] >> 0) & 0xFFFF) << 0); + + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->eth_type >> 0) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->match_mask >> 0) & 0xFFFF) << 0); + + packed_record[5] = (packed_record[5] & 0xFFF0) | + (((rec->match_type >> 0) & 0xF) << 0); + + packed_record[5] = + (packed_record[5] & 0xFFEF) | (((rec->action >> 0) & 0x1) << 4); + + return set_raw_ingress_record(hw, packed_record, 6, 5, + ROWOFFSET_INGRESSPOSTCTLFRECORD + + table_index); +} + +int aq_mss_set_ingress_postctlf_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_postctlf_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_ingress_postctlf_record, hw, rec, + table_index); +} + +static int +get_ingress_postctlf_record(struct aq_hw_s *hw, + struct aq_mss_ingress_postctlf_record *rec, + u16 table_index) +{ + u16 packed_record[6]; + int ret; + + if (table_index >= NUMROWS_INGRESSPOSTCTLFRECORD) + return -EINVAL; + + /* If the row that we want to read is odd, first read the previous even + * row, throw that value away, and finally read the desired row. 
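+	 * (Same even-row dummy-read workaround as in
+	 * get_ingress_prectlf_record(); on HHD, odd-numbered rows are not
+	 * readable.)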
+ */ + if ((table_index % 2) > 0) { + ret = get_raw_ingress_record(hw, packed_record, 6, 5, + ROWOFFSET_INGRESSPOSTCTLFRECORD + + table_index - 1); + if (unlikely(ret)) + return ret; + } + + ret = get_raw_ingress_record(hw, packed_record, 6, 5, + ROWOFFSET_INGRESSPOSTCTLFRECORD + + table_index); + if (unlikely(ret)) + return ret; + + rec->sa_da[0] = (rec->sa_da[0] & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->sa_da[0] = (rec->sa_da[0] & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->sa_da[1] = (rec->sa_da[1] & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + + rec->eth_type = (rec->eth_type & 0xFFFF0000) | + (((packed_record[3] >> 0) & 0xFFFF) << 0); + + rec->match_mask = (rec->match_mask & 0xFFFF0000) | + (((packed_record[4] >> 0) & 0xFFFF) << 0); + + rec->match_type = (rec->match_type & 0xFFFFFFF0) | + (((packed_record[5] >> 0) & 0xF) << 0); + + rec->action = (rec->action & 0xFFFFFFFE) | + (((packed_record[5] >> 4) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_ingress_postctlf_record(struct aq_hw_s *hw, + struct aq_mss_ingress_postctlf_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_ingress_postctlf_record, hw, rec, + table_index); +} + static int set_egress_ctlf_record(struct aq_hw_s *hw, const struct aq_mss_egress_ctlf_record *rec, u16 table_index) diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h index cbc1226ae0d7..ab5415f99a32 100644 --- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h @@ -9,6 +9,27 @@ #include "aq_hw.h" #include "macsec_struct.h" +#define NUMROWS_INGRESSPRECTLFRECORD 24 +#define ROWOFFSET_INGRESSPRECTLFRECORD 0 + +#define NUMROWS_INGRESSPRECLASSRECORD 48 +#define ROWOFFSET_INGRESSPRECLASSRECORD 0 + +#define NUMROWS_INGRESSPOSTCLASSRECORD 48 +#define ROWOFFSET_INGRESSPOSTCLASSRECORD 0 + +#define NUMROWS_INGRESSSCRECORD 32 +#define ROWOFFSET_INGRESSSCRECORD 0 + +#define NUMROWS_INGRESSSARECORD 32 +#define ROWOFFSET_INGRESSSARECORD 32 + +#define NUMROWS_INGRESSSAKEYRECORD 32 +#define ROWOFFSET_INGRESSSAKEYRECORD 0 + +#define NUMROWS_INGRESSPOSTCTLFRECORD 24 +#define ROWOFFSET_INGRESSPOSTCTLFRECORD 0 + #define NUMROWS_EGRESSCTLFRECORD 24 #define ROWOFFSET_EGRESSCTLFRECORD 0 @@ -114,6 +135,133 @@ int aq_mss_set_egress_sakey_record(struct aq_hw_s *hw, const struct aq_mss_egress_sakey_record *rec, u16 table_index); +/*! Read the raw table data from the specified row of the Ingress + * Pre-MACSec CTL Filter table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. + * table_index - The table row to read (max 23). + */ +int aq_mss_get_ingress_prectlf_record(struct aq_hw_s *hw, + struct aq_mss_ingress_prectlf_record *rec, + u16 table_index); + +/*! Pack the fields of rec, and write the packed data into the + * specified row of the Ingress Pre-MACSec CTL Filter table. + * rec - [IN] The bitfield values to write to the table row. + * table_index - The table row to write(max 23). + */ +int aq_mss_set_ingress_prectlf_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_prectlf_record *rec, + u16 table_index); + +/*! Read the raw table data from the specified row of the Ingress + * Pre-MACSec Packet Classifier table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. 
+ * table_index - The table row to read (max 47).
+ */
+int aq_mss_get_ingress_preclass_record(struct aq_hw_s *hw,
+				       struct aq_mss_ingress_preclass_record *rec,
+				       u16 table_index);
+
+/*! Pack the fields of rec, and write the packed data into the
+ * specified row of the Ingress Pre-MACSec Packet Classifier table.
+ * rec - [IN] The bitfield values to write to the table row.
+ * table_index - The table row to write(max 47).
+ */
+int aq_mss_set_ingress_preclass_record(struct aq_hw_s *hw,
+				       const struct aq_mss_ingress_preclass_record *rec,
+				       u16 table_index);
+
+/*! Read the raw table data from the specified row of the Ingress SC
+ * Lookup table, and unpack it into the fields of rec.
+ * rec - [OUT] The raw table row data will be unpacked into the fields of rec.
+ * table_index - The table row to read (max 31).
+ */
+int aq_mss_get_ingress_sc_record(struct aq_hw_s *hw,
+				 struct aq_mss_ingress_sc_record *rec,
+				 u16 table_index);
+
+/*! Pack the fields of rec, and write the packed data into the
+ * specified row of the Ingress SC Lookup table.
+ * rec - [IN] The bitfield values to write to the table row.
+ * table_index - The table row to write(max 31).
+ */
+int aq_mss_set_ingress_sc_record(struct aq_hw_s *hw,
+				 const struct aq_mss_ingress_sc_record *rec,
+				 u16 table_index);
+
+/*! Read the raw table data from the specified row of the Ingress SA
+ * Lookup table, and unpack it into the fields of rec.
+ * rec - [OUT] The raw table row data will be unpacked into the fields of rec.
+ * table_index - The table row to read (max 31).
+ */
+int aq_mss_get_ingress_sa_record(struct aq_hw_s *hw,
+				 struct aq_mss_ingress_sa_record *rec,
+				 u16 table_index);
+
+/*! Pack the fields of rec, and write the packed data into the
+ * specified row of the Ingress SA Lookup table.
+ * rec - [IN] The bitfield values to write to the table row.
+ * table_index - The table row to write(max 31).
+ */
+int aq_mss_set_ingress_sa_record(struct aq_hw_s *hw,
+				 const struct aq_mss_ingress_sa_record *rec,
+				 u16 table_index);
+
+/*! Read the raw table data from the specified row of the Ingress SA
+ * Key Lookup table, and unpack it into the fields of rec.
+ * rec - [OUT] The raw table row data will be unpacked into the fields of rec.
+ * table_index - The table row to read (max 31).
+ */
+int aq_mss_get_ingress_sakey_record(struct aq_hw_s *hw,
+				    struct aq_mss_ingress_sakey_record *rec,
+				    u16 table_index);
+
+/*! Pack the fields of rec, and write the packed data into the
+ * specified row of the Ingress SA Key Lookup table.
+ * rec - [IN] The bitfield values to write to the table row.
+ * table_index - The table row to write(max 31).
+ */
+int aq_mss_set_ingress_sakey_record(struct aq_hw_s *hw,
+				    const struct aq_mss_ingress_sakey_record *rec,
+				    u16 table_index);
+
+/*! Read the raw table data from the specified row of the Ingress
+ * Post-MACSec Packet Classifier table, and unpack it into the
+ * fields of rec.
+ * rec - [OUT] The raw table row data will be unpacked into the fields of rec.
+ * table_index - The table row to read (max 47).
+ */
+int aq_mss_get_ingress_postclass_record(struct aq_hw_s *hw,
+					struct aq_mss_ingress_postclass_record *rec,
+					u16 table_index);
+
+/*! Pack the fields of rec, and write the packed data into the
+ * specified row of the Ingress Post-MACSec Packet Classifier table.
+ * rec - [IN] The bitfield values to write to the table row.
+ * table_index - The table row to write(max 47).
+ */ +int aq_mss_set_ingress_postclass_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_postclass_record *rec, + u16 table_index); + +/*! Read the raw table data from the specified row of the Ingress + * Post-MACSec CTL Filter table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. + * table_index - The table row to read (max 23). + */ +int aq_mss_get_ingress_postctlf_record(struct aq_hw_s *hw, + struct aq_mss_ingress_postctlf_record *rec, + u16 table_index); + +/*! Pack the fields of rec, and write the packed data into the + * specified row of the Ingress Post-MACSec CTL Filter table. + * rec - [IN] The bitfield values to write to the table row. + * table_index - The table row to write(max 23). + */ +int aq_mss_set_ingress_postctlf_record(struct aq_hw_s *hw, + const struct aq_mss_ingress_postctlf_record *rec, + u16 table_index); + /*! Get Egress SA expired. */ int aq_mss_get_egress_sa_expired(struct aq_hw_s *hw, u32 *expired); /*! Get Egress SA threshold expired. */ diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h index 7232bec643db..8c38a3470518 100644 --- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h @@ -314,4 +314,387 @@ struct aq_mss_egress_sakey_record { u32 key[8]; }; +/*! Represents the bitfields of a single row in the Ingress Pre-MACSec + * CTL Filter table. + */ +struct aq_mss_ingress_prectlf_record { + /*! This is used to store the 48 bit value used to compare SA, DA + * or halfDA+half SA value. + */ + u32 sa_da[2]; + /*! This is used to store the 16 bit ethertype value used for + * comparison. + */ + u32 eth_type; + /*! The match mask is per-nibble. 0 means don't care, i.e. every + * value will match successfully. The total data is 64 bit, i.e. + * 16 nibbles masks. + */ + u32 match_mask; + /*! 0: No compare, i.e. This entry is not used + * 1: compare DA only + * 2: compare SA only + * 3: compare half DA + half SA + * 4: compare ether type only + * 5: compare DA + ethertype + * 6: compare SA + ethertype + * 7: compare DA+ range. + */ + u32 match_type; + /*! 0: Bypass the remaining modules if matched. + * 1: Forward to next module for more classifications. + */ + u32 action; +}; + +/*! Represents the bitfields of a single row in the Ingress Pre-MACSec + * Packet Classifier table. + */ +struct aq_mss_ingress_preclass_record { + /*! The 64 bit SCI field used to compare with extracted value. + * Should have SCI value in case TCI[SCI_SEND] == 0. This will be + * used for ICV calculation. + */ + u32 sci[2]; + /*! The 8 bit TCI field used to compare with extracted value. */ + u32 tci; + /*! 8 bit encryption offset. */ + u32 encr_offset; + /*! The 16 bit Ethertype (in the clear) field used to compare with + * extracted value. + */ + u32 eth_type; + /*! This is to specify the 40bit SNAP header if the SNAP header's + * mask is enabled. + */ + u32 snap[2]; + /*! This is to specify the 24bit LLC header if the LLC header's + * mask is enabled. + */ + u32 llc; + /*! The 48 bit MAC_SA field used to compare with extracted value. */ + u32 mac_sa[2]; + /*! The 48 bit MAC_DA field used to compare with extracted value. */ + u32 mac_da[2]; + /*! 0: this is to compare with non-LPBK packet + * 1: this is to compare with LPBK packet. + * This value is used to compare with a controlled-tag which goes + * with the packet when looped back from Egress port. 
+ */
+	u32 lpbk_packet;
+	/*! The value of this bit mask will affect how the SC index and SA
+	 * index are created.
+	 * 2'b00: 1 SC has 4 SA.
+	 *   SC index is equivalent to {SC_Index[4:2], 1'b0}.
+	 *   SA index is equivalent to {SC_Index[4:2], SECTAG's AN[1:0]}
+	 *   Here AN bits are not compared.
+	 * 2'b10: 1 SC has 2 SA.
+	 *   SC index is equivalent to SC_Index[4:1]
+	 *   SA index is equivalent to {SC_Index[4:1], SECTAG's AN[0]}
+	 *   Compare AN[1] field only
+	 * 2'b11: 1 SC has 1 SA. No SC entry exists for the specific SA.
+	 *   SA index is equivalent to SC_Index[4:0]
+	 *   AN[1:0] bits are compared.
+	 * NOTE: This design is to support different usage of AN. User
+	 * can either ping-pong buffer 2 SA by using only the AN[0] bit,
+	 * or use 4 SA per SC by using the AN[1:0] bits, or even treat
+	 * each SA as independent, i.e. AN[1:0] is just another matching
+	 * pointer to select SA.
+	 */
+	u32 an_mask;
+	/*! This is a bit mask to enable comparison of the upper 6 bits of
+	 * the TCI field, which does not include the AN field.
+	 * 0: don't compare
+	 * 1: enable comparison of the bits.
+	 */
+	u32 tci_mask;
+	/*! 0: don't care
+	 * 1: enable comparison of SCI.
+	 */
+	u32 sci_mask;
+	/*! Mask is per-byte.
+	 * 0: don't care
+	 * 1: enable comparison of Ethertype.
+	 */
+	u32 eth_type_mask;
+	/*! Mask is per-byte.
+	 * 0: don't care and no SNAP header exists.
+	 * 1: compare the SNAP header.
+	 * If this bit is set to 1, the extracted field will assume the
+	 * SNAP header exists as encapsulated in 802.3 (RFC 1042), i.e. the
+	 * next 5 bytes after the LLC header are the SNAP header.
+	 */
+	u32 snap_mask;
+	/*! Mask is per-byte.
+	 * 0: don't care and no LLC header exists.
+	 * 1: compare the LLC header.
+	 * If this bit is set to 1, the extracted field will assume the
+	 * LLC header exists as encapsulated in 802.3 (RFC 1042), i.e. the
+	 * next three bytes after the 802.3 MAC header are the LLC header.
+	 */
+	u32 llc_mask;
+	/*! Reserved. This bit should be always 0. */
+	u32 _802_2_encapsulate;
+	/*! Mask is per-byte.
+	 * 0: don't care
+	 * 1: enable comparison of MAC_SA.
+	 */
+	u32 sa_mask;
+	/*! Mask is per-byte.
+	 * 0: don't care
+	 * 1: enable comparison of MAC_DA.
+	 */
+	u32 da_mask;
+	/*! 0: don't care
+	 * 1: enable checking if this is loopback packet or not.
+	 */
+	u32 lpbk_mask;
+	/*! If the packet matches and is tagged as a controlled-packet, this
+	 * SC/SA index is used for later SC and SA table lookup.
+	 */
+	u32 sc_idx;
+	/*! 0: the packets will be sent to MAC FIFO
+	 * 1: The packets will be sent to Debug/Loopback FIFO.
+	 * If the above action is drop, this bit has no meaning.
+	 */
+	u32 proc_dest;
+	/*! 0: Process: Forward to next two modules for 802.1AE decryption.
+	 * 1: Process but keep SECTAG: Forward to next two modules for
+	 * 802.1AE decryption but keep the MACSEC header with added error
+	 * code information. ICV will be stripped for all control packets.
+	 * 2: Bypass: Bypass the next two decryption modules but processed
+	 * by post-classification.
+	 * 3: Drop: drop this packet and update counts accordingly.
+	 */
+	u32 action;
+	/*! 0: This is a controlled-port packet if matched.
+	 * 1: This is an uncontrolled-port packet if matched.
+	 */
+	u32 ctrl_unctrl;
+	/*! Use the SCI value from the Table if the 'SC' bit of the input
+	 * packet is not present.
+	 */
+	u32 sci_from_table;
+	/*! Reserved. */
+	u32 reserved;
+	/*! 0: Not valid entry. This entry is not used
+	 * 1: valid entry.
+	 */
+	u32 valid;
+};
+
+/*! Represents the bitfields of a single row in the Ingress SC Lookup table. */
+struct aq_mss_ingress_sc_record {
+	/*! This is to specify when the SC was last used. Set by HW. */
+	u32 stop_time;
+	/*! This is to specify when the SC was first used. Set by HW. */
+	u32 start_time;
+	/*! 0: Strict
+	 * 1: Check
+	 * 2: Disabled.
+	 */
+	u32 validate_frames;
+	/*! 1: Replay control enabled.
+	 * 0: replay control disabled.
+	 */
+	u32 replay_protect;
+	/*! This is to specify the window range for anti-replay. Default is 0.
+	 * 0: is strict order enforcement.
+	 */
+	u32 anti_replay_window;
+	/*! 0: when none of the SA related to SC has inUse set.
+	 * 1: when either of the SA related to the SC has inUse set.
+	 * This bit is set by HW.
+	 */
+	u32 receiving;
+	/*! 0: when hardware processed the SC for the first time, it clears
+	 * this bit
+	 * 1: This bit is set by SW, when it sets up the SC.
+	 */
+	u32 fresh;
+	/*! 0: The AN number will not automatically roll over if Next_PN is
+	 * saturated.
+	 * 1: The AN number will automatically roll over if Next_PN is
+	 * saturated.
+	 * Rollover is valid only after expiry. Normal rollover between
+	 * SAs is part of the normal process.
+	 */
+	u32 an_rol;
+	/*! Reserved. */
+	u32 reserved;
+	/*! 0: Invalid SC
+	 * 1: Valid SC.
+	 */
+	u32 valid;
+};
+
+/*! Represents the bitfields of a single row in the Ingress SA Lookup table. */
+struct aq_mss_ingress_sa_record {
+	/*! This is to specify when the SA was last used. Set by HW. */
+	u32 stop_time;
+	/*! This is to specify when the SA was first used. Set by HW. */
+	u32 start_time;
+	/*! This is updated by HW to store the expected NextPN number for
+	 * anti-replay.
+	 */
+	u32 next_pn;
+	/*! The Next_PN number is going to wrap around from 0XFFFF_FFFF
+	 * to 0. Set by HW.
+	 */
+	u32 sat_nextpn;
+	/*! 0: This SA is not yet used.
+	 * 1: This SA is inUse.
+	 */
+	u32 in_use;
+	/*! 0: when hardware processed the SA for the first time, it clears
+	 * this bit
+	 * 1: This bit is set by SW, when it sets up the SA.
+	 */
+	u32 fresh;
+	/*! Reserved. */
+	u32 reserved;
+	/*! 0: Invalid SA.
+	 * 1: Valid SA.
+	 */
+	u32 valid;
+};
+
+/*! Represents the bitfields of a single row in the Ingress SA Key
+ * Lookup table.
+ */
+struct aq_mss_ingress_sakey_record {
+	/*! Key for AES-GCM processing. */
+	u32 key[8];
+	/*! AES key size
+	 * 00 - 128bits
+	 * 01 - 192bits
+	 * 10 - 256bits
+	 * 11 - reserved.
+	 */
+	u32 key_len;
+};
+
+/*! Represents the bitfields of a single row in the Ingress Post-
+ * MACSec Packet Classifier table.
+ */
+struct aq_mss_ingress_postclass_record {
+	/*! The 8 bit value used to compare with extracted value for byte 0. */
+	u32 byte0;
+	/*! The 8 bit value used to compare with extracted value for byte 1. */
+	u32 byte1;
+	/*! The 8 bit value used to compare with extracted value for byte 2. */
+	u32 byte2;
+	/*! The 8 bit value used to compare with extracted value for byte 3. */
+	u32 byte3;
+	/*! Ethertype in the packet. */
+	u32 eth_type;
+	/*! Ether Type value > 1500 (0x5dc). */
+	u32 eth_type_valid;
+	/*! VLAN ID after parsing. */
+	u32 vlan_id;
+	/*! VLAN priority after parsing. */
+	u32 vlan_up;
+	/*! Valid VLAN coding. */
+	u32 vlan_valid;
+	/*! SA index. */
+	u32 sai;
+	/*! SAI hit, i.e. controlled packet. */
+	u32 sai_hit;
+	/*! Mask for payload ethertype field. */
+	u32 eth_type_mask;
+	/*! 0~63: byte location to be extracted by the packet comparator;
+	 * this can be anywhere in the first 64 bytes of the MAC packet.
+	 * The byte location is counted from the MAC DA address, i.e. 0
+	 * points to byte 0 of the DA address.
+	 */
+	u32 byte3_location;
+	/*! Mask for Byte Offset 3. */
+	u32 byte3_mask;
+	/*! 0~63: byte location to be extracted by the packet comparator;
+	 * this can be anywhere in the first 64 bytes of the MAC packet.
+	 * The byte location is counted from the MAC DA address, i.e. 0
+	 * points to byte 0 of the DA address.
+	 */
+	u32 byte2_location;
+	/*! Mask for Byte Offset 2. */
+	u32 byte2_mask;
+	/*! 0~63: byte location to be extracted by the packet comparator;
+	 * this can be anywhere in the first 64 bytes of the MAC packet.
+	 * The byte location is counted from the MAC DA address, i.e. 0
+	 * points to byte 0 of the DA address.
+	 */
+	u32 byte1_location;
+	/*! Mask for Byte Offset 1. */
+	u32 byte1_mask;
+	/*! 0~63: byte location to be extracted by the packet comparator;
+	 * this can be anywhere in the first 64 bytes of the MAC packet.
+	 * The byte location is counted from the MAC DA address, i.e. 0
+	 * points to byte 0 of the DA address.
+	 */
+	u32 byte0_location;
+	/*! Mask for Byte Offset 0. */
+	u32 byte0_mask;
+	/*! Mask for Ethertype valid field. Indicates 802.3 vs. Other. */
+	u32 eth_type_valid_mask;
+	/*! Mask for VLAN ID field. */
+	u32 vlan_id_mask;
+	/*! Mask for VLAN UP field. */
+	u32 vlan_up_mask;
+	/*! Mask for VLAN valid field. */
+	u32 vlan_valid_mask;
+	/*! Mask for SAI. */
+	u32 sai_mask;
+	/*! Mask for SAI_HIT. */
+	u32 sai_hit_mask;
+	/*! Action if only first level matches and second level does not.
+	 * 0: pass
+	 * 1: drop (fail).
+	 */
+	u32 firstlevel_actions;
+	/*! Action if both first and second level matched.
+	 * 0: pass
+	 * 1: drop (fail).
+	 */
+	u32 secondlevel_actions;
+	/*! Reserved. */
+	u32 reserved;
+	/*! 0: Not valid entry. This entry is not used
+	 * 1: valid entry.
+	 */
+	u32 valid;
+};
+
+/*! Represents the bitfields of a single row in the Ingress Post-
+ * MACSec CTL Filter table.
+ */
+struct aq_mss_ingress_postctlf_record {
+	/*! This is used to store the 48 bit value used to compare SA, DA
+	 * or halfDA+half SA value.
+	 */
+	u32 sa_da[2];
+	/*! This is used to store the 16 bit ethertype value used for
+	 * comparison.
+	 */
+	u32 eth_type;
+	/*! The match mask is per-nibble. 0 means don't care, i.e. every
+	 * value will match successfully. The total data is 64 bit, i.e.
+	 * 16 nibbles masks.
+	 */
+	u32 match_mask;
+	/*! 0: No compare, i.e. This entry is not used
+	 * 1: compare DA only
+	 * 2: compare SA only
+	 * 3: compare half DA + half SA
+	 * 4: compare ether type only
+	 * 5: compare DA + ethertype
+	 * 6: compare SA + ethertype
+	 * 7: compare DA+ range.
+	 */
+	u32 match_type;
+	/*! 0: Bypass the remaining modules if matched.
+	 * 1: Forward to next module for more classifications.
+ */ + u32 action; +}; + #endif From patchwork Mon Mar 23 13:13:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Igor Russkikh X-Patchwork-Id: 222044 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6AAC7C4332B for ; Mon, 23 Mar 2020 13:15:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 29B6F20722 for ; Mon, 23 Mar 2020 13:15:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="CFcL8kbp" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728470AbgCWNPT (ORCPT ); Mon, 23 Mar 2020 09:15:19 -0400 Received: from mx0b-0016f401.pphosted.com ([67.231.156.173]:39982 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728465AbgCWNPT (ORCPT ); Mon, 23 Mar 2020 09:15:19 -0400 Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 02ND6MJ5019116; Mon, 23 Mar 2020 06:15:15 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=fBXOcb4OrsA6tcH/siHX1yhmLqvbP+2FSLNUr3gwQvc=; b=CFcL8kbp8UQPTabxi1WP8hXydQGbQ2neh46QjsHA4lZi5SuRnQcwhlIDYz47Og10wu4P 1bM0r5z+M9TB7CbIekT2lcii3aHjPzzNwFDpSOupgHrBdK62KdiS0dW21kYZjmoJ3YKV MOWvpeUbUEJxX8YSLFd3DJi8M3Wj7gQo0Mw4KaZ4omKJCJ+8hGIk84XXWiUZ75NtAfpR FuBrmh2bRS8IlAJDii+Crd+qM2qU6c+kMD6fGjdbPP5DICnQMzyxj5Iekfu9fUGQgns3 x7iXAqdJ1Yp37hzQqXjjmeS6ACnyBJzAdyyHnj4ZWZG3odZiNYYwcMuAqA1GLh9Dyrt4 6g== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2ywvkqmn55-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 23 Mar 2020 06:15:15 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Mon, 23 Mar 2020 06:15:13 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Mon, 23 Mar 2020 06:15:13 -0700 Received: from localhost.localdomain (unknown [10.9.16.91]) by maili.marvell.com (Postfix) with ESMTP id 7808D3F7041; Mon, 23 Mar 2020 06:15:11 -0700 (PDT) From: Igor Russkikh To: CC: Mark Starovoytov , Sabrina Dubroca , Antoine Tenart , "Igor Russkikh" Subject: [PATCH net-next 14/17] net: atlantic: MACSec ingress offload implementation Date: Mon, 23 Mar 2020 16:13:45 +0300 Message-ID: <20200323131348.340-15-irusskikh@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com> References: <20200323131348.340-1-irusskikh@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.645 definitions=2020-03-23_04:2020-03-21,2020-03-23 signatures=0 Sender: netdev-owner@vger.kernel.org Precedence: bulk 
List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mark Starovoytov This patch adds support for MACSec ingress HW offloading on Atlantic network cards. Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- .../ethernet/aquantia/atlantic/aq_macsec.c | 463 +++++++++++++++++- .../ethernet/aquantia/atlantic/aq_macsec.h | 5 + 2 files changed, 462 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c index cf5862958e92..92244184659e 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c @@ -25,6 +25,10 @@ static int aq_clear_txsc(struct aq_nic_s *nic, const int txsc_idx, enum aq_clear_type clear_type); static int aq_clear_txsa(struct aq_nic_s *nic, struct aq_macsec_txsc *aq_txsc, const int sa_num, enum aq_clear_type clear_type); +static int aq_clear_rxsc(struct aq_nic_s *nic, const int rxsc_idx, + enum aq_clear_type clear_type); +static int aq_clear_rxsa(struct aq_nic_s *nic, struct aq_macsec_rxsc *aq_rxsc, + const int sa_num, enum aq_clear_type clear_type); static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, enum aq_clear_type clear_type); static int aq_apply_macsec_cfg(struct aq_nic_s *nic); @@ -57,6 +61,22 @@ static int aq_get_txsc_idx_from_secy(struct aq_macsec_cfg *macsec_cfg, return -1; } +static int aq_get_rxsc_idx_from_rxsc(struct aq_macsec_cfg *macsec_cfg, + const struct macsec_rx_sc *rxsc) +{ + int i; + + if (unlikely(!rxsc)) + return -1; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (macsec_cfg->aq_rxsc[i].sw_rxsc == rxsc) + return i; + } + + return -1; +} + static int aq_get_txsc_idx_from_sc_idx(const enum aq_macsec_sc_sa sc_sa, const int sc_idx) { @@ -527,34 +547,393 @@ static int aq_mdo_del_txsa(struct macsec_context *ctx) return ret; } +static int aq_rxsc_validate_frames(const enum macsec_validation_type validate) +{ + switch (validate) { + case MACSEC_VALIDATE_DISABLED: + return 2; + case MACSEC_VALIDATE_CHECK: + return 1; + case MACSEC_VALIDATE_STRICT: + return 0; + default: + break; + } + + /* should never be here */ + WARN_ON(true); + return 0; +} + +static int aq_set_rxsc(struct aq_nic_s *nic, const u32 rxsc_idx) +{ + const struct aq_macsec_rxsc *aq_rxsc = + &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + struct aq_mss_ingress_preclass_record pre_class_record; + const struct macsec_rx_sc *rx_sc = aq_rxsc->sw_rxsc; + const struct macsec_secy *secy = aq_rxsc->sw_secy; + const u32 hw_sc_idx = aq_rxsc->hw_sc_idx; + struct aq_mss_ingress_sc_record sc_record; + struct aq_hw_s *hw = nic->aq_hw; + __be64 nsci; + int ret = 0; + + netdev_dbg(nic->ndev, + "set rx_sc: rxsc_idx=%d, sci %#llx, hw_sc_idx=%d\n", + rxsc_idx, rx_sc->sci, hw_sc_idx); + + memset(&pre_class_record, 0, sizeof(pre_class_record)); + nsci = cpu_to_be64((__force u64)rx_sc->sci); + memcpy(pre_class_record.sci, &nsci, sizeof(nsci)); + pre_class_record.sci_mask = 0xff; + /* match all MACSEC ethertype packets */ + pre_class_record.eth_type = ETH_P_MACSEC; + pre_class_record.eth_type_mask = 0x3; + + aq_ether_addr_to_mac(pre_class_record.mac_sa, (char *)&rx_sc->sci); + pre_class_record.sa_mask = 0x3f; + + pre_class_record.an_mask = nic->macsec_cfg->sc_sa; + pre_class_record.sc_idx = hw_sc_idx; + /* strip SecTAG & forward for decryption */ + pre_class_record.action = 0x0; + pre_class_record.valid = 1; + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx + 1); + if (ret) { + netdev_err(nic->ndev, + 
"aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + /* If SCI is absent, then match by SA alone */ + pre_class_record.sci_mask = 0; + pre_class_record.sci_from_table = 1; + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + memset(&sc_record, 0, sizeof(sc_record)); + sc_record.validate_frames = + aq_rxsc_validate_frames(secy->validate_frames); + if (secy->replay_protect) { + sc_record.replay_protect = 1; + sc_record.anti_replay_window = secy->replay_window; + } + sc_record.valid = 1; + sc_record.fresh = 1; + + ret = aq_mss_set_ingress_sc_record(hw, &sc_record, hw_sc_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_sc_record failed with %d\n", + ret); + return ret; + } + + return ret; +} + static int aq_mdo_add_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const u32 rxsc_idx_max = aq_sc_idx_max(cfg->sc_sa); + u32 rxsc_idx; + int ret = 0; + + if (hweight32(cfg->rxsc_idx_busy) >= rxsc_idx_max) + return -ENOSPC; + + rxsc_idx = ffz(cfg->rxsc_idx_busy); + if (rxsc_idx >= rxsc_idx_max) + return -ENOSPC; + + if (ctx->prepare) + return 0; + + cfg->aq_rxsc[rxsc_idx].hw_sc_idx = aq_to_hw_sc_idx(rxsc_idx, + cfg->sc_sa); + cfg->aq_rxsc[rxsc_idx].sw_secy = ctx->secy; + cfg->aq_rxsc[rxsc_idx].sw_rxsc = ctx->rx_sc; + netdev_dbg(nic->ndev, "add rxsc: rxsc_idx=%u, hw_sc_idx=%u, rxsc=%p\n", + rxsc_idx, cfg->aq_rxsc[rxsc_idx].hw_sc_idx, + cfg->aq_rxsc[rxsc_idx].sw_rxsc); + + if (netif_carrier_ok(nic->ndev) && netif_running(ctx->secy->netdev)) + ret = aq_set_rxsc(nic, rxsc_idx); + + if (ret < 0) + return ret; + + set_bit(rxsc_idx, &cfg->rxsc_idx_busy); + + return 0; } static int aq_mdo_upd_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, ctx->rx_sc); + if (rxsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(ctx->secy->netdev)) + ret = aq_set_rxsc(nic, rxsc_idx); + + return ret; +} + +static int aq_clear_rxsc(struct aq_nic_s *nic, const int rxsc_idx, + enum aq_clear_type clear_type) +{ + struct aq_macsec_rxsc *rx_sc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + int sa_num; + + for_each_set_bit (sa_num, &rx_sc->rx_sa_idx_busy, AQ_MACSEC_MAX_SA) { + ret = aq_clear_rxsa(nic, rx_sc, sa_num, clear_type); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_HW) { + struct aq_mss_ingress_preclass_record pre_class_record; + struct aq_mss_ingress_sc_record sc_record; + + memset(&pre_class_record, 0, sizeof(pre_class_record)); + memset(&sc_record, 0, sizeof(sc_record)); + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx + 1); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + sc_record.fresh = 1; + ret = aq_mss_set_ingress_sc_record(hw, &sc_record, + rx_sc->hw_sc_idx); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_SW) { + 
clear_bit(rxsc_idx, &nic->macsec_cfg->rxsc_idx_busy); + rx_sc->sw_secy = NULL; + rx_sc->sw_rxsc = NULL; + } + + return ret; } static int aq_mdo_del_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + enum aq_clear_type clear_type = AQ_CLEAR_SW; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, ctx->rx_sc); + if (rxsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev)) + clear_type = AQ_CLEAR_ALL; + + ret = aq_clear_rxsc(nic, rxsc_idx, clear_type); + + return ret; +} + +static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx, + const struct macsec_secy *secy, + const struct macsec_rx_sa *rx_sa, + const unsigned char *key, const unsigned char an) +{ + struct aq_mss_ingress_sakey_record sa_key_record; + struct aq_mss_ingress_sa_record sa_record; + struct aq_hw_s *hw = nic->aq_hw; + const int sa_idx = sc_idx | an; + int ret = 0; + + netdev_dbg(nic->ndev, "set rx_sa %d: active=%d, next_pn=%d\n", an, + rx_sa->active, rx_sa->next_pn); + + memset(&sa_record, 0, sizeof(sa_record)); + sa_record.valid = rx_sa->active; + sa_record.fresh = 1; + sa_record.next_pn = rx_sa->next_pn; + + ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_sa_record failed with %d\n", + ret); + return ret; + } + + if (!key) + return ret; + + memset(&sa_key_record, 0, sizeof(sa_key_record)); + memcpy(&sa_key_record.key, key, secy->key_len); + + switch (secy->key_len) { + case AQ_MACSEC_KEY_LEN_128_BIT: + sa_key_record.key_len = 0; + break; + case AQ_MACSEC_KEY_LEN_192_BIT: + sa_key_record.key_len = 1; + break; + case AQ_MACSEC_KEY_LEN_256_BIT: + sa_key_record.key_len = 2; + break; + default: + return -1; + } + + aq_rotate_keys(&sa_key_record.key, secy->key_len); + + ret = aq_mss_set_ingress_sakey_record(hw, &sa_key_record, sa_idx); + if (ret) + netdev_err(nic->ndev, + "aq_mss_set_ingress_sakey_record failed with %d\n", + ret); + + return ret; } static int aq_mdo_add_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + const struct macsec_secy *secy = ctx->secy; + struct aq_macsec_rxsc *aq_rxsc; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + aq_rxsc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + set_bit(ctx->sa.assoc_num, &aq_rxsc->rx_sa_idx_busy); + + memcpy(aq_rxsc->rx_sa_key[ctx->sa.assoc_num], ctx->sa.key, + secy->key_len); + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_rxsa(nic, aq_rxsc->hw_sc_idx, secy, + ctx->sa.rx_sa, ctx->sa.key, + ctx->sa.assoc_num); + + return ret; } static int aq_mdo_upd_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_rxsa(nic, cfg->aq_rxsc[rxsc_idx].hw_sc_idx, + secy, ctx->sa.rx_sa, NULL, + ctx->sa.assoc_num); + + return ret; +} + +static int aq_clear_rxsa(struct 
aq_nic_s *nic, struct aq_macsec_rxsc *aq_rxsc, + const int sa_num, enum aq_clear_type clear_type) +{ + int sa_idx = aq_rxsc->hw_sc_idx | sa_num; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + if (clear_type & AQ_CLEAR_SW) + clear_bit(sa_num, &aq_rxsc->rx_sa_idx_busy); + + if ((clear_type & AQ_CLEAR_HW) && netif_carrier_ok(nic->ndev)) { + struct aq_mss_ingress_sakey_record sa_key_record; + struct aq_mss_ingress_sa_record sa_record; + + memset(&sa_key_record, 0, sizeof(sa_key_record)); + memset(&sa_record, 0, sizeof(sa_record)); + sa_record.fresh = 1; + ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); + if (ret) + return ret; + + return aq_mss_set_ingress_sakey_record(hw, &sa_key_record, + sa_idx); + } + + return ret; } static int aq_mdo_del_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + ret = aq_clear_rxsa(nic, &cfg->aq_rxsc[rxsc_idx], ctx->sa.assoc_num, + AQ_CLEAR_ALL); + + return ret; } static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) @@ -585,10 +964,40 @@ static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) return ret; } +static int apply_rxsc_cfg(struct aq_nic_s *nic, const int rxsc_idx) +{ + struct aq_macsec_rxsc *aq_rxsc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + const struct macsec_secy *secy = aq_rxsc->sw_secy; + struct macsec_rx_sa *rx_sa; + int ret = 0; + int i; + + if (!netif_running(secy->netdev)) + return ret; + + ret = aq_set_rxsc(nic, rxsc_idx); + if (ret) + return ret; + + for (i = 0; i < MACSEC_NUM_AN; i++) { + rx_sa = rcu_dereference_bh(aq_rxsc->sw_rxsc->sa[i]); + if (rx_sa) { + ret = aq_update_rxsa(nic, aq_rxsc->hw_sc_idx, secy, + rx_sa, aq_rxsc->rx_sa_key[i], i); + if (ret) + return ret; + } + } + + return ret; +} + static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, enum aq_clear_type clear_type) { + struct macsec_rx_sc *rx_sc; int txsc_idx; + int rxsc_idx; int ret = 0; txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); @@ -598,19 +1007,43 @@ static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, return ret; } + for (rx_sc = rcu_dereference_bh(secy->rx_sc); rx_sc; + rx_sc = rcu_dereference_bh(rx_sc->next)) { + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (rxsc_idx < 0) + continue; + + ret = aq_clear_rxsc(nic, rxsc_idx, clear_type); + if (ret) + return ret; + } + return ret; } static int aq_apply_secy_cfg(struct aq_nic_s *nic, const struct macsec_secy *secy) { + struct macsec_rx_sc *rx_sc; int txsc_idx; + int rxsc_idx; int ret = 0; txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); if (txsc_idx >= 0) apply_txsc_cfg(nic, txsc_idx); + for (rx_sc = rcu_dereference_bh(secy->rx_sc); rx_sc && rx_sc->active; + rx_sc = rcu_dereference_bh(rx_sc->next)) { + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (unlikely(rxsc_idx < 0)) + continue; + + ret = apply_rxsc_cfg(nic, rxsc_idx); + if (ret) + return ret; + } + return ret; } @@ -627,6 +1060,14 @@ static int aq_apply_macsec_cfg(struct aq_nic_s *nic) } } + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (nic->macsec_cfg->rxsc_idx_busy & BIT(i)) { + ret = apply_rxsc_cfg(nic, i); + if (ret) + return ret; + } + } + return ret; } @@ -802,6 +1243,7 @@ int 
aq_macsec_enable(struct aq_nic_s *nic) /* Init Ethertype bypass filters */ for (index = 0; index < ARRAY_SIZE(ctl_ether_types); index++) { + struct aq_mss_ingress_prectlf_record rx_prectlf_rec; struct aq_mss_egress_ctlf_record tx_ctlf_rec; if (ctl_ether_types[index] == 0) @@ -815,6 +1257,15 @@ int aq_macsec_enable(struct aq_nic_s *nic) tbl_idx = NUMROWS_EGRESSCTLFRECORD - num_ctl_ether_types - 1; aq_mss_set_egress_ctlf_record(hw, &tx_ctlf_rec, tbl_idx); + memset(&rx_prectlf_rec, 0, sizeof(rx_prectlf_rec)); + rx_prectlf_rec.eth_type = ctl_ether_types[index]; + rx_prectlf_rec.match_type = 4; /* Match eth_type only */ + rx_prectlf_rec.match_mask = 0xf; /* match for eth_type */ + rx_prectlf_rec.action = 0; /* Bypass MACSEC modules */ + tbl_idx = + NUMROWS_INGRESSPRECTLFRECORD - num_ctl_ether_types - 1; + aq_mss_set_ingress_prectlf_record(hw, &rx_prectlf_rec, tbl_idx); + num_ctl_ether_types++; } diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h index 5ab0ee4bea73..b8485c1cb667 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h @@ -31,6 +31,11 @@ struct aq_macsec_txsc { }; struct aq_macsec_rxsc { + u32 hw_sc_idx; + unsigned long rx_sa_idx_busy; + const struct macsec_secy *sw_secy; + const struct macsec_rx_sc *sw_rxsc; + u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN]; }; struct aq_macsec_cfg { From patchwork Mon Mar 23 13:13:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Igor Russkikh X-Patchwork-Id: 222042 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.9 required=3.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 175E0C54FCE for ; Mon, 23 Mar 2020 13:15:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id DAC942072E for ; Mon, 23 Mar 2020 13:15:27 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="kNdCotXm" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728496AbgCWNP0 (ORCPT ); Mon, 23 Mar 2020 09:15:26 -0400 Received: from mx0a-0016f401.pphosted.com ([67.231.148.174]:12316 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728426AbgCWNP0 (ORCPT ); Mon, 23 Mar 2020 09:15:26 -0400 Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 02ND6OGd010599; Mon, 23 Mar 2020 06:15:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0818; bh=4tRVerfTUmvExULVhvv03GDgv4tbrQtjBQEwIX/LA9Q=; b=kNdCotXmmPxnXQpd0/1/uSq9zLi1xut9+o4hG9KvhqvnsUGzrl8mWFuF2BGlBlJLCUJs YdSxBpELVtq5UT9TSyJ5fW1lesKFsXLBio+8vwRTGHD7E6XIX8hx36mmRjXloTPP8V2V NZ9Frv+PW6H6xrLQ4fm/H9KWnmLjobss+BrFjOj8VrKSdTZ2k7Km9C7KM5krJgjKkNhN qeV4M95oQyd7y+SuFmLUT/4UyMV7kd61XBtqLuUHnvQCem6XOC/qJrDmFICk8nbIYBQr 
lhI5XVBbdigsce0kI97RVEwqfc5k2q+mEC/LrOBivz8ycZ5I5OF1dCwDYIXMf6FRsiRH mw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2ywg9nefsm-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Mon, 23 Mar 2020 06:15:22 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Mon, 23 Mar 2020 06:15:21 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Mon, 23 Mar 2020 06:15:20 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Mon, 23 Mar 2020 06:15:20 -0700 Received: from localhost.localdomain (unknown [10.9.16.91]) by maili.marvell.com (Postfix) with ESMTP id ED32C3F703F; Mon, 23 Mar 2020 06:15:18 -0700 (PDT) From: Igor Russkikh To: CC: Mark Starovoytov , Sabrina Dubroca , Antoine Tenart , "Igor Russkikh" Subject: [PATCH net-next 17/17] net: atlantic: add XPN handling Date: Mon, 23 Mar 2020 16:13:48 +0300 Message-ID: <20200323131348.340-18-irusskikh@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200323131348.340-1-irusskikh@marvell.com> References: <20200323131348.340-1-irusskikh@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.645 definitions=2020-03-23_04:2020-03-21,2020-03-23 signatures=0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Mark Starovoytov This patch adds XPN handling. Our driver doesn't support XPN, but we should still update a couple of places in the code, because the size of 'next_pn' field has changed. 
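For context, the XPN series replaces the flat 32-bit next_pn in the MACsec SA structures with a 64-bit counter addressable by halves. A minimal sketch of that type, simplified from the 'union pn' added to include/net/macsec.h by the XPN series (kernel types assumed; layout shown for illustration):

union pn {
	struct {
#if defined(__LITTLE_ENDIAN_BITFIELD)
		u32 lower;	/* low 32 bits of the packet number */
		u32 upper;	/* high 32 bits, used only with XPN */
#elif defined(__BIG_ENDIAN_BITFIELD)
		u32 upper;
		u32 lower;
#endif
	};
	u64 full64;		/* the whole 64-bit packet number */
};

The SA structures alias this with the old u64 next_pn, which is why the driver below switches to reading next_pn_halves.lower. Since the AQC hardware tracks only a 32-bit PN, the driver keeps programming just the lower half and, as the diff below shows, rejects XPN-enabled SecYs in aq_mdo_add_secy() with -EOPNOTSUPP.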
Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- drivers/net/ethernet/aquantia/atlantic/aq_macsec.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c index dc1da79b8b26..bc23b8bf4a72 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c @@ -461,6 +461,9 @@ static int aq_mdo_add_secy(struct macsec_context *ctx) u32 txsc_idx; int ret = 0; + if (secy->xpn) + return -EOPNOTSUPP; + sc_sa = sc_sa_from_num_an(MACSEC_NUM_AN); if (sc_sa == aq_macsec_sa_sc_not_used) return -EINVAL; @@ -567,6 +570,7 @@ static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx, const struct macsec_tx_sa *tx_sa, const unsigned char *key, const unsigned char an) { + const u32 next_pn = tx_sa->next_pn_halves.lower; struct aq_mss_egress_sakey_record key_rec; const unsigned int sa_idx = sc_idx | an; struct aq_mss_egress_sa_record sa_rec; @@ -574,12 +578,12 @@ static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx, int ret = 0; netdev_dbg(nic->ndev, "set tx_sa %d: active=%d, next_pn=%d\n", an, - tx_sa->active, tx_sa->next_pn); + tx_sa->active, next_pn); memset(&sa_rec, 0, sizeof(sa_rec)); sa_rec.valid = tx_sa->active; sa_rec.fresh = 1; - sa_rec.next_pn = tx_sa->next_pn; + sa_rec.next_pn = next_pn; ret = aq_mss_set_egress_sa_record(hw, &sa_rec, sa_idx); if (ret) { @@ -941,18 +945,19 @@ static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx, const unsigned char *key, const unsigned char an) { struct aq_mss_ingress_sakey_record sa_key_record; + const u32 next_pn = rx_sa->next_pn_halves.lower; struct aq_mss_ingress_sa_record sa_record; struct aq_hw_s *hw = nic->aq_hw; const int sa_idx = sc_idx | an; int ret = 0; netdev_dbg(nic->ndev, "set rx_sa %d: active=%d, next_pn=%d\n", an, - rx_sa->active, rx_sa->next_pn); + rx_sa->active, next_pn); memset(&sa_record, 0, sizeof(sa_record)); sa_record.valid = rx_sa->active; sa_record.fresh = 1; - sa_record.next_pn = rx_sa->next_pn; + sa_record.next_pn = next_pn; ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); if (ret) {