From patchwork Tue Apr 28 01:39:02 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 220423
From: Vladimir Oltean
To: netdev@vger.kernel.org
Cc: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com,
    vinicius.gomes@intel.com, po.liu@nxp.com, xiaoliang.yang@nxp.com,
    mingkai.hu@nxp.com, christian.herber@nxp.com, claudiu.manoil@nxp.com,
    vladimir.oltean@nxp.com, alexandru.marginean@nxp.com, vlad@buslov.dev,
    jiri@mellanox.com, idosch@mellanox.com, kuba@kernel.org
Subject: [RFC PATCH 1/5] net: dsa: export dsa_slave_dev_check and dsa_slave_to_port
Date: Tue, 28 Apr 2020 04:39:02 +0300
Message-Id: <20200428013906.19904-2-olteanv@gmail.com>
In-Reply-To: <20200428013906.19904-1-olteanv@gmail.com>
References: <20200428013906.19904-1-olteanv@gmail.com>

From: Vladimir Oltean

To be able to perform mirroring and redirection through tc-flower offloads
(the implementation of which is given raw access to the flow_cls_offload
structure), switch drivers need to be able to call these functions on
act->dev.

Signed-off-by: Vladimir Oltean
---
 include/net/dsa.h  | 2 ++
 net/dsa/dsa_priv.h | 8 --------
 net/dsa/slave.c    | 9 +++++++++
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/include/net/dsa.h b/include/net/dsa.h
index fb3f9222f2a1..62beaa4c234e 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -739,6 +739,8 @@ int dsa_port_get_phy_strings(struct dsa_port *dp, uint8_t *data);
 int dsa_port_get_ethtool_phy_stats(struct dsa_port *dp, uint64_t *data);
 int dsa_port_get_phy_sset_count(struct dsa_port *dp);
 void dsa_port_phylink_mac_change(struct dsa_switch *ds, int port, bool up);
+bool dsa_slave_dev_check(const struct net_device *dev);
+struct dsa_port *dsa_slave_to_port(const struct net_device *dev);
 
 struct dsa_tag_driver {
 	const struct dsa_device_ops *ops;
diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index 6d9a1ef65fa0..32bf570fd71c 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -173,19 +173,11 @@ extern const struct dsa_device_ops notag_netdev_ops;
 void dsa_slave_mii_bus_init(struct dsa_switch *ds);
 int dsa_slave_create(struct dsa_port *dp);
 void dsa_slave_destroy(struct net_device *slave_dev);
-bool dsa_slave_dev_check(const struct net_device *dev);
 int dsa_slave_suspend(struct net_device *slave_dev);
 int dsa_slave_resume(struct net_device *slave_dev);
 int dsa_slave_register_notifier(void);
 void dsa_slave_unregister_notifier(void);
 
-static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
-{
-	struct dsa_slave_priv *p = netdev_priv(dev);
-
-	return p->dp;
-}
-
 static inline struct net_device *
 dsa_slave_to_master(const struct net_device *dev)
 {
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index ba8bf90dc0cc..4eeb5b47ef99 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -62,6 +62,14 @@ static int dsa_slave_get_iflink(const struct net_device *dev)
 	return dsa_slave_to_master(dev)->ifindex;
 }
 
+struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
+{
+	struct dsa_slave_priv *p = netdev_priv(dev);
+
+	return p->dp;
+}
+EXPORT_SYMBOL_GPL(dsa_slave_to_port);
+
 static int dsa_slave_open(struct net_device *dev)
 {
 	struct net_device *master = dsa_slave_to_master(dev);
@@ -1836,6 +1844,7 @@ bool dsa_slave_dev_check(const struct net_device *dev)
 {
 	return dev->netdev_ops == &dsa_slave_netdev_ops;
 }
+EXPORT_SYMBOL_GPL(dsa_slave_dev_check);
 
 static int dsa_slave_changeupper(struct net_device *dev,
 				 struct netdev_notifier_changeupper_info *info)
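The usage pattern these exports enable is the one patch 4/5 of this series
follows in sja1105_flower.c: when a tc-flower rule carries a redirect action,
the driver first checks that act->dev really is a DSA slave interface and only
then resolves it to a switch port. A minimal sketch of that pattern, with the
helper name chosen here only for illustration and the surrounding driver
context assumed:

  #include <net/dsa.h>
  #include <net/flow_offload.h>
  #include <linux/netlink.h>

  /* Sketch only: resolve the net_device targeted by a redirect action
   * into a switch port index that the driver can program into hardware.
   */
  static int example_resolve_redirect_port(const struct flow_action_entry *act,
                                           struct netlink_ext_ack *extack)
  {
          struct dsa_port *to_dp;

          /* act->dev can be any net_device; only DSA slave ports are usable */
          if (!dsa_slave_dev_check(act->dev)) {
                  NL_SET_ERR_MSG_MOD(extack, "Destination not a switch port");
                  return -EOPNOTSUPP;
          }

          to_dp = dsa_slave_to_port(act->dev);

          return to_dp->index;
  }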
From patchwork Tue Apr 28 01:39:05 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 220422
From: Vladimir Oltean
To: netdev@vger.kernel.org
Cc: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com,
    vinicius.gomes@intel.com, po.liu@nxp.com, xiaoliang.yang@nxp.com,
    mingkai.hu@nxp.com, christian.herber@nxp.com, claudiu.manoil@nxp.com,
    vladimir.oltean@nxp.com, alexandru.marginean@nxp.com,
    vlad@buslov.dev, jiri@mellanox.com, idosch@mellanox.com, kuba@kernel.org
Subject: [RFC PATCH 4/5] net: dsa: sja1105: support flow-based redirection via virtual links
Date: Tue, 28 Apr 2020 04:39:05 +0300
Message-Id: <20200428013906.19904-5-olteanv@gmail.com>
In-Reply-To: <20200428013906.19904-1-olteanv@gmail.com>
References: <20200428013906.19904-1-olteanv@gmail.com>

From: Vladimir Oltean

Implement tc-flower offloads for redirect, trap and drop using
non-critical virtual links.

Commands which were tested to work are:

  # Send frames received on swp2 with a DA of 42:be:24:9b:76:20 to the
  # CPU and to swp3. This type of key (DA only) is used when the port's
  # VLAN awareness state is off.
  tc qdisc add dev swp2 clsact
  tc filter add dev swp2 ingress flower skip_sw dst_mac 42:be:24:9b:76:20 \
          action mirred egress redirect dev swp3 \
          action trap

  # Drop frames received on swp2 with a DA of 42:be:24:9b:76:20, a VID
  # of 100 and a PCP of 0.
  tc filter add dev swp2 ingress protocol 802.1Q flower skip_sw \
          dst_mac 42:be:24:9b:76:20 vlan_id 100 vlan_prio 0 action drop

Under the hood, all rules match on DMAC, VID and PCP, but when VLAN
filtering is disabled, those are set internally by the driver to the
port-based defaults. Because we would be put in an awkward situation if
the user were to change the VLAN filtering state while there are active
rules (packets would no longer match on the specified keys), we simply
deny changing vlan_filtering unless the list of flows offloaded via
virtual links is empty. Then the user can re-add new rules.

Signed-off-by: Vladimir Oltean
---
 drivers/net/dsa/sja1105/Kconfig               |   9 +
 drivers/net/dsa/sja1105/Makefile              |   4 +
 drivers/net/dsa/sja1105/sja1105.h             |  18 ++
 drivers/net/dsa/sja1105/sja1105_flower.c      |  57 +++-
 drivers/net/dsa/sja1105/sja1105_main.c        |  12 +-
 drivers/net/dsa/sja1105/sja1105_vl.c          | 302 ++++++++++++++++++
 drivers/net/dsa/sja1105/sja1105_vl.h          |  41 +++
 .../net/ethernet/freescale/enetc/enetc_qos.c  |  17 +
 8 files changed, 454 insertions(+), 6 deletions(-)
 create mode 100644 drivers/net/dsa/sja1105/sja1105_vl.c
 create mode 100644 drivers/net/dsa/sja1105/sja1105_vl.h

diff --git a/drivers/net/dsa/sja1105/Kconfig b/drivers/net/dsa/sja1105/Kconfig
index 0fe1ae173aa1..bc59c3fbab50 100644
--- a/drivers/net/dsa/sja1105/Kconfig
+++ b/drivers/net/dsa/sja1105/Kconfig
@@ -33,3 +33,12 @@ config NET_DSA_SJA1105_TAS
 	  This enables support for the TTEthernet-based egress scheduling
 	  engine in the SJA1105 DSA driver, which is controlled using a
 	  hardware offload of the tc-tqprio qdisc.
+
+config NET_DSA_SJA1105_VL
+	bool "Support for Virtual Links on NXP SJA1105"
+	depends on NET_DSA_SJA1105_TAS
+	help
+	  This enables support for flow classification using capable devices
+	  (SJA1105T, SJA1105Q, SJA1105S).
The following actions are supported: + - redirect, trap, drop + - time-based ingress policing, via the tc-gate action diff --git a/drivers/net/dsa/sja1105/Makefile b/drivers/net/dsa/sja1105/Makefile index 8943d8d66f2b..c88e56a29db8 100644 --- a/drivers/net/dsa/sja1105/Makefile +++ b/drivers/net/dsa/sja1105/Makefile @@ -17,3 +17,7 @@ endif ifdef CONFIG_NET_DSA_SJA1105_TAS sja1105-objs += sja1105_tas.o endif + +ifdef CONFIG_NET_DSA_SJA1105_VL +sja1105-objs += sja1105_vl.o +endif diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h index 95633ad9bfb7..1756000f6936 100644 --- a/drivers/net/dsa/sja1105/sja1105.h +++ b/drivers/net/dsa/sja1105/sja1105.h @@ -126,6 +126,13 @@ struct sja1105_key { enum sja1105_rule_type { SJA1105_RULE_BCAST_POLICER, SJA1105_RULE_TC_POLICER, + SJA1105_RULE_VL, +}; + +enum sja1105_vl_type { + SJA1105_VL_NONCRITICAL, + SJA1105_VL_RATE_CONSTRAINED, + SJA1105_VL_TIME_TRIGGERED, }; struct sja1105_rule { @@ -135,6 +142,7 @@ struct sja1105_rule { struct sja1105_key key; enum sja1105_rule_type type; + /* Action */ union { /* SJA1105_RULE_BCAST_POLICER */ struct { @@ -145,12 +153,19 @@ struct sja1105_rule { struct { int sharindx; } tc_pol; + + /* SJA1105_RULE_VL */ + struct { + unsigned long destports; + enum sja1105_vl_type type; + } vl; }; }; struct sja1105_flow_block { struct list_head rules; bool l2_policer_used[SJA1105_NUM_L2_POLICERS]; + int num_virtual_links; }; struct sja1105_private { @@ -187,6 +202,7 @@ enum sja1105_reset_reason { SJA1105_AGEING_TIME, SJA1105_SCHEDULING, SJA1105_BEST_EFFORT_POLICING, + SJA1105_VIRTUAL_LINKS, }; int sja1105_static_config_reload(struct sja1105_private *priv, @@ -290,5 +306,7 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port, struct flow_cls_offload *cls, bool ingress); void sja1105_flower_setup(struct dsa_switch *ds); void sja1105_flower_teardown(struct dsa_switch *ds); +struct sja1105_rule *sja1105_rule_find(struct sja1105_private *priv, + unsigned long cookie); #endif diff --git a/drivers/net/dsa/sja1105/sja1105_flower.c b/drivers/net/dsa/sja1105/sja1105_flower.c index 3246d5a49436..48d7cd8e5bef 100644 --- a/drivers/net/dsa/sja1105/sja1105_flower.c +++ b/drivers/net/dsa/sja1105/sja1105_flower.c @@ -2,9 +2,10 @@ /* Copyright 2020, NXP Semiconductors */ #include "sja1105.h" +#include "sja1105_vl.h" -static struct sja1105_rule *sja1105_rule_find(struct sja1105_private *priv, - unsigned long cookie) +struct sja1105_rule *sja1105_rule_find(struct sja1105_private *priv, + unsigned long cookie) { struct sja1105_rule *rule; @@ -173,7 +174,8 @@ static int sja1105_setup_tc_policer(struct sja1105_private *priv, static int sja1105_flower_policer(struct sja1105_private *priv, int port, struct netlink_ext_ack *extack, - unsigned long cookie, struct sja1105_key *key, + unsigned long cookie, + struct sja1105_key *key, u64 rate_bytes_per_sec, s64 burst) { @@ -308,6 +310,7 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port, const struct flow_action_entry *act; unsigned long cookie = cls->cookie; struct sja1105_key key; + bool vl_rule = false; int rc, i; rc = sja1105_flower_parse_key(priv, extack, cls, &key); @@ -319,13 +322,50 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port, flow_action_for_each(i, act, &rule->action) { switch (act->id) { case FLOW_ACTION_POLICE: - rc = sja1105_flower_policer(priv, port, - extack, cookie, &key, + rc = sja1105_flower_policer(priv, port, extack, cookie, + &key, act->police.rate_bytes_ps, act->police.burst); if (rc) goto out; break; + case 
FLOW_ACTION_TRAP: { + int cpu = dsa_upstream_port(ds, port); + + vl_rule = true; + + rc = sja1105_vl_redirect(priv, port, extack, cookie, + &key, BIT(cpu), true); + if (rc) + goto out; + break; + } + case FLOW_ACTION_REDIRECT: { + struct dsa_port *to_dp; + + if (!dsa_slave_dev_check(act->dev)) { + NL_SET_ERR_MSG_MOD(extack, + "Destination not a switch port"); + return -EOPNOTSUPP; + } + + to_dp = dsa_slave_to_port(act->dev); + vl_rule = true; + + rc = sja1105_vl_redirect(priv, port, extack, cookie, + &key, BIT(to_dp->index), true); + if (rc) + goto out; + break; + } + case FLOW_ACTION_DROP: + vl_rule = true; + + rc = sja1105_vl_redirect(priv, port, extack, cookie, + &key, 0, false); + if (rc) + goto out; + break; default: NL_SET_ERR_MSG_MOD(extack, "Action not supported"); @@ -333,6 +373,10 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port, goto out; } } + + if (vl_rule && !rc) + rc = sja1105_static_config_reload(priv, SJA1105_VIRTUAL_LINKS); + out: return rc; } @@ -348,6 +392,9 @@ int sja1105_cls_flower_del(struct dsa_switch *ds, int port, if (!rule) return 0; + if (rule->type == SJA1105_RULE_VL) + return sja1105_vl_delete(priv, port, rule, cls->common.extack); + policing = priv->static_config.tables[BLK_IDX_L2_POLICING].entries; if (rule->type == SJA1105_RULE_BCAST_POLICER) { diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c index 472f4eb20c49..8bb104ee73d5 100644 --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c @@ -445,7 +445,7 @@ static int sja1105_init_general_params(struct sja1105_private *priv) */ .casc_port = SJA1105_NUM_PORTS, /* No TTEthernet */ - .vllupformat = 0, + .vllupformat = SJA1105_VL_FORMAT_PSFP, .vlmarker = 0, .vlmask = 0, /* Only update correctionField for 1-step PTP (L2 transport) */ @@ -1589,6 +1589,7 @@ static const char * const sja1105_reset_reasons[] = { [SJA1105_AGEING_TIME] = "Ageing time", [SJA1105_SCHEDULING] = "Time-aware scheduling", [SJA1105_BEST_EFFORT_POLICING] = "Best-effort policing", + [SJA1105_VIRTUAL_LINKS] = "Virtual links", }; /* For situations where we need to change a setting at runtime that is only @@ -1831,9 +1832,18 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled) struct sja1105_general_params_entry *general_params; struct sja1105_private *priv = ds->priv; struct sja1105_table *table; + struct sja1105_rule *rule; u16 tpid, tpid2; int rc; + list_for_each_entry(rule, &priv->flow_block.rules, list) { + if (rule->type == SJA1105_RULE_VL) { + dev_err(ds->dev, + "Cannot change VLAN filtering state while VL rules are active\n"); + return -EBUSY; + } + } + if (enabled) { /* Enable VLAN filtering. */ tpid = ETH_P_8021Q; diff --git a/drivers/net/dsa/sja1105/sja1105_vl.c b/drivers/net/dsa/sja1105/sja1105_vl.c new file mode 100644 index 000000000000..c226779b8275 --- /dev/null +++ b/drivers/net/dsa/sja1105/sja1105_vl.c @@ -0,0 +1,302 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright 2020, NXP Semiconductors + */ +#include +#include "sja1105.h" + +/* The switch flow classification core implements TTEthernet, which 'thinks' in + * terms of Virtual Links (VL), a concept borrowed from ARINC 664 part 7. + * However it also has one other operating mode (VLLUPFORMAT=0) where it acts + * somewhat closer to a pre-standard implementation of IEEE 802.1Qci + * (Per-Stream Filtering and Policing), which is what the driver is going to be + * implementing. 
+ * + * VL Lookup + * Key = {DMAC && VLANID +---------+ Key = { (DMAC[47:16] & VLMASK == + * && VLAN PCP | | VLMARKER) + * && INGRESS PORT} +---------+ (both fixed) + * (exact match, | && DMAC[15:0] == VLID + * all specified in rule) | (specified in rule) + * v && INGRESS PORT } + * ------------ + * 0 (PSFP) / \ 1 (ARINC664) + * +-----------/ VLLUPFORMAT \----------+ + * | \ (fixed) / | + * | \ / | + * 0 (forwarding) v ------------ | + * ------------ | + * / \ 1 (QoS classification) | + * +---/ ISCRITICAL \-----------+ | + * | \ (per rule) / | | + * | \ / VLID taken from VLID taken from + * v ------------ index of rule contents of rule + * select that matched that matched + * DESTPORTS | | + * | +---------+--------+ + * | | + * | v + * | VL Forwarding + * | (indexed by VLID) + * | +---------+ + * | +--------------| | + * | | select TYPE +---------+ + * | v + * | 0 (rate ------------ 1 (time + * | constrained) / \ triggered) + * | +------/ TYPE \------------+ + * | | \ (per VLID) / | + * | v \ / v + * | VL Policing ------------ VL Policing + * | (indexed by VLID) (indexed by VLID) + * | +---------+ +---------+ + * | | TYPE=0 | | TYPE=1 | + * | +---------+ +---------+ + * | select SHARINDX select SHARINDX to + * | to rate-limit re-enter VL Forwarding + * | groups of VL's with new VLID for egress + * | to same quota | + * | | | + * | select MAXLEN -> exceed => drop select MAXLEN -> exceed => drop + * | | | + * | v v + * | VL Forwarding VL Forwarding + * | (indexed by SHARINDX) (indexed by SHARINDX) + * | +---------+ +---------+ + * | | TYPE=0 | | TYPE=1 | + * | +---------+ +---------+ + * | select PRIORITY, select PRIORITY, + * | PARTITION, DESTPORTS PARTITION, DESTPORTS + * | | | + * | v v + * | VL Policing VL Policing + * | (indexed by SHARINDX) (indexed by SHARINDX) + * | +---------+ +---------+ + * | | TYPE=0 | | TYPE=1 | + * | +---------+ +---------+ + * | | | + * | v | + * | select BAG, -> exceed => drop | + * | JITTER v + * | | ---------------------------------------------- + * | | / Reception Window is open for this VL \ + * | | / (the Schedule Table executes an entry i \ + * | | / M <= i < N, for which these conditions hold): \ no + * | | +----/ \-+ + * | | |yes \ WINST[M] == 1 && WINSTINDEX[M] == VLID / | + * | | | \ WINEND[N] == 1 && WINSTINDEX[N] == VLID / | + * | | | \ / | + * | | | \ (the VL window has opened and not yet closed)/ | + * | | | ---------------------------------------------- | + * | | v v + * | | dispatch to DESTPORTS when the Schedule Table drop + * | | executes an entry i with TXEN == 1 && VLINDEX == i + * v v + * dispatch immediately to DESTPORTS + * + * The per-port classification key is always composed of {DMAC, VID, PCP} and + * is non-maskable. This 'looks like' the NULL stream identification function + * from IEEE 802.1CB clause 6, except for the extra VLAN PCP. When the switch + * ports operate as VLAN-unaware, we do allow the user to not specify the VLAN + * ID and PCP, and then the port-based defaults will be used. + * + * In TTEthernet, routing is something that needs to be done manually for each + * Virtual Link. So the flow action must always include one of: + * a. 'redirect', 'trap' or 'drop': select the egress port list + * Additionally, the following actions may be applied on a Virtual Link, + * turning it into 'critical' traffic: + * b. 'police': turn it into a rate-constrained VL, with bandwidth limitation + * given by the maximum frame length, bandwidth allocation gap (BAG) and + * maximum jitter. + * c. 
'gate': turn it into a time-triggered VL, which can be only be received + * and forwarded according to a given schedule. + */ + +static bool sja1105_vl_key_lower(struct sja1105_vl_lookup_entry *a, + struct sja1105_vl_lookup_entry *b) +{ + if (a->macaddr < b->macaddr) + return true; + if (a->macaddr > b->macaddr) + return false; + if (a->vlanid < b->vlanid) + return true; + if (a->vlanid > b->vlanid) + return false; + if (a->port < b->port) + return true; + if (a->port > b->port) + return false; + if (a->vlanprior < b->vlanprior) + return true; + if (a->vlanprior > b->vlanprior) + return false; + /* Keys are equal */ + return false; +} + +static int sja1105_init_virtual_links(struct sja1105_private *priv, + struct netlink_ext_ack *extack) +{ + struct sja1105_vl_lookup_entry *vl_lookup; + struct sja1105_table *table; + struct sja1105_rule *rule; + int num_virtual_links = 0; + int i, j, k; + + /* Figure out the dimensioning of the problem */ + list_for_each_entry(rule, &priv->flow_block.rules, list) { + if (rule->type != SJA1105_RULE_VL) + continue; + /* Each VL lookup entry matches on a single ingress port */ + num_virtual_links += hweight_long(rule->port_mask); + } + + if (num_virtual_links > SJA1105_MAX_VL_LOOKUP_COUNT) { + NL_SET_ERR_MSG_MOD(extack, "Not enough VL entries available"); + return -ENOSPC; + } + + /* Discard previous VL Lookup Table */ + table = &priv->static_config.tables[BLK_IDX_VL_LOOKUP]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + + /* Nothing to do */ + if (!num_virtual_links) + return 0; + + /* Pre-allocate space in the static config tables */ + + /* VL Lookup Table */ + table = &priv->static_config.tables[BLK_IDX_VL_LOOKUP]; + table->entries = kcalloc(num_virtual_links, + table->ops->unpacked_entry_size, + GFP_KERNEL); + if (!table->entries) + return -ENOMEM; + table->entry_count = num_virtual_links; + vl_lookup = table->entries; + + k = 0; + + list_for_each_entry(rule, &priv->flow_block.rules, list) { + unsigned long port; + + if (rule->type != SJA1105_RULE_VL) + continue; + + for_each_set_bit(port, &rule->port_mask, SJA1105_NUM_PORTS) { + vl_lookup[k].format = SJA1105_VL_FORMAT_PSFP; + vl_lookup[k].port = port; + vl_lookup[k].macaddr = rule->key.vl.dmac; + if (rule->key.type == SJA1105_KEY_VLAN_AWARE_VL) { + vl_lookup[k].vlanid = rule->key.vl.vid; + vl_lookup[k].vlanprior = rule->key.vl.pcp; + } else { + u16 vid = dsa_8021q_rx_vid(priv->ds, port); + + vl_lookup[k].vlanid = vid; + vl_lookup[k].vlanprior = 0; + } + /* For critical VLs, the DESTPORTS mask is taken from + * the VL Forwarding Table, so no point in putting it + * in the VL Lookup Table + */ + if (rule->vl.type == SJA1105_VL_NONCRITICAL) + vl_lookup[k].destports = rule->vl.destports; + else + vl_lookup[k].iscritical = true; + k++; + } + } + + /* UM10944.pdf chapter 4.2.3 VL Lookup table: + * "the entries in the VL Lookup table must be sorted in ascending + * order (i.e. the smallest value must be loaded first) according to + * the following sort order: MACADDR, VLANID, PORT, VLANPRIOR." 
+ */ + for (i = 0; i < num_virtual_links; i++) { + struct sja1105_vl_lookup_entry *a = &vl_lookup[i]; + + for (j = i + 1; j < num_virtual_links; j++) { + struct sja1105_vl_lookup_entry *b = &vl_lookup[j]; + + if (sja1105_vl_key_lower(b, a)) { + struct sja1105_vl_lookup_entry tmp = *a; + + *a = *b; + *b = tmp; + } + } + } + + return 0; +} + +int sja1105_vl_redirect(struct sja1105_private *priv, int port, + struct netlink_ext_ack *extack, unsigned long cookie, + struct sja1105_key *key, unsigned long destports, + bool append) +{ + struct sja1105_rule *rule = sja1105_rule_find(priv, cookie); + int rc; + + if (dsa_port_is_vlan_filtering(dsa_to_port(priv->ds, port)) && + key->type != SJA1105_KEY_VLAN_AWARE_VL) { + NL_SET_ERR_MSG_MOD(extack, + "Can only redirect based on {DMAC, VID, PCP}"); + return -EOPNOTSUPP; + } else if (key->type != SJA1105_KEY_VLAN_UNAWARE_VL) { + NL_SET_ERR_MSG_MOD(extack, + "Can only redirect based on DMAC"); + return -EOPNOTSUPP; + } + + if (!rule) { + rule = kzalloc(sizeof(*rule), GFP_KERNEL); + if (!rule) + return -ENOMEM; + + rule->cookie = cookie; + rule->type = SJA1105_RULE_VL; + rule->key = *key; + list_add(&rule->list, &priv->flow_block.rules); + } + + rule->port_mask |= BIT(port); + if (append) + rule->vl.destports |= destports; + else + rule->vl.destports = destports; + + rc = sja1105_init_virtual_links(priv, extack); + if (rc) { + rule->port_mask &= ~BIT(port); + if (!rule->port_mask) { + list_del(&rule->list); + kfree(rule); + } + } + + return rc; +} + +int sja1105_vl_delete(struct sja1105_private *priv, int port, + struct sja1105_rule *rule, struct netlink_ext_ack *extack) +{ + int rc; + + rule->port_mask &= ~BIT(port); + if (!rule->port_mask) { + list_del(&rule->list); + kfree(rule); + } + + rc = sja1105_init_virtual_links(priv, extack); + if (rc) + return rc; + + return sja1105_static_config_reload(priv, SJA1105_VIRTUAL_LINKS); +} diff --git a/drivers/net/dsa/sja1105/sja1105_vl.h b/drivers/net/dsa/sja1105/sja1105_vl.h new file mode 100644 index 000000000000..08ee5557b463 --- /dev/null +++ b/drivers/net/dsa/sja1105/sja1105_vl.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright 2020, NXP Semiconductors + */ +#ifndef _SJA1105_VL_H +#define _SJA1105_VL_H + +#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_VL) + +int sja1105_vl_redirect(struct sja1105_private *priv, int port, + struct netlink_ext_ack *extack, unsigned long cookie, + struct sja1105_key *key, unsigned long destports, + bool append); + +int sja1105_vl_delete(struct sja1105_private *priv, int port, + struct sja1105_rule *rule, + struct netlink_ext_ack *extack); + +#else + +static inline int sja1105_vl_redirect(struct sja1105_private *priv, int port, + struct netlink_ext_ack *extack, + unsigned long cookie, + struct sja1105_key *key, + unsigned long destports, + bool append) +{ + NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in"); + return -EOPNOTSUPP; +} + +static inline int sja1105_vl_delete(struct sja1105_private *priv, + int port, struct sja1105_rule *rule, + struct netlink_ext_ack *extack) +{ + NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in"); + return -EOPNOTSUPP; +} + +#endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_VL) */ + +#endif /* _SJA1105_VL_H */ diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c index 7944c243903c..0761edc43c0a 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c @@ -1001,12 +1001,29 @@ static int 
enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv,
 
 		flow_rule_match_eth_addrs(rule, &match);
 
+		if (!is_zero_ether_addr(match.mask->dst) &&
+		    !is_zero_ether_addr(match.mask->src)) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "Cannot match on both source and destination MAC");
+			goto free_filter;
+		}
+
 		if (!is_zero_ether_addr(match.mask->dst)) {
+			if (!is_broadcast_ether_addr(match.mask->dst)) {
+				NL_SET_ERR_MSG_MOD(extack,
+						   "Masked matching on destination MAC not supported");
+				goto free_filter;
+			}
 			ether_addr_copy(filter->sid.dst_mac, match.key->dst);
 			filter->sid.filtertype = STREAMID_TYPE_NULL;
 		}
 
 		if (!is_zero_ether_addr(match.mask->src)) {
+			if (!is_broadcast_ether_addr(match.mask->src)) {
+				NL_SET_ERR_MSG_MOD(extack,
+						   "Masked matching on source MAC not supported");
+				goto free_filter;
+			}
 			ether_addr_copy(filter->sid.src_mac, match.key->src);
 			filter->sid.filtertype = STREAMID_TYPE_SMAC;
 		}
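The enetc hunk above leans on the flower convention that an all-zero MAC mask
means the key was not specified, an all-ones (broadcast) mask means an exact
match, and anything in between is a partial mask, which the PSFP stream
identification table cannot express. A small self-contained sketch of that
three-way check, using the same etherdevice helpers as the hunk; the enum and
function names are illustrative only:

  #include <linux/etherdevice.h>

  enum example_mac_match {
          EXAMPLE_MAC_UNSPECIFIED,        /* mask 00:00:00:00:00:00 */
          EXAMPLE_MAC_EXACT,              /* mask ff:ff:ff:ff:ff:ff */
          EXAMPLE_MAC_MASKED,             /* anything in between */
  };

  /* Sketch only: classify a flower MAC mask the way the hunk above does
   * before deciding to accept or reject the filter.
   */
  static enum example_mac_match example_classify_mac_mask(const u8 *mask)
  {
          if (is_zero_ether_addr(mask))
                  return EXAMPLE_MAC_UNSPECIFIED;
          if (is_broadcast_ether_addr(mask))
                  return EXAMPLE_MAC_EXACT;
          return EXAMPLE_MAC_MASKED;      /* rejected by the enetc parser */
  }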
From patchwork Tue Apr 28 01:39:06 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 220421
From: Vladimir Oltean
To: netdev@vger.kernel.org
Cc: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com,
    vinicius.gomes@intel.com, po.liu@nxp.com, xiaoliang.yang@nxp.com,
    mingkai.hu@nxp.com, christian.herber@nxp.com, claudiu.manoil@nxp.com,
    vladimir.oltean@nxp.com, alexandru.marginean@nxp.com, vlad@buslov.dev,
    jiri@mellanox.com, idosch@mellanox.com, kuba@kernel.org
Subject: [RFC PATCH 5/5] net: dsa: sja1105: implement tc-gate using time-triggered virtual links
Date: Tue, 28 Apr 2020 04:39:06 +0300
Message-Id: <20200428013906.19904-6-olteanv@gmail.com>
In-Reply-To: <20200428013906.19904-1-olteanv@gmail.com>
References: <20200428013906.19904-1-olteanv@gmail.com>

From: Vladimir Oltean

Restrict the TTEthernet hardware support on this switch to operate as
closely to IEEE 802.1Qci as possible. This means that it can perform
PTP-time-based ingress admission control on streams identified by
{DMAC, VID, PCP}, which is useful when trying to ensure the determinism
of traffic scheduled via IEEE 802.1Qbv.

The oddity comes from the fact that in hardware (and in TTEthernet at
large), virtual links always need a full-blown action, including not
only the type of policing, but also the list of destination ports. So
in practice, a single tc-gate action will result in all packets getting
dropped. Additional actions (either "trap" or "redirect") need to be
specified in the same filter rule such that the conforming packets are
actually forwarded somewhere.

Apart from the VL Lookup, Policing and Forwarding tables which need to
be programmed for each flow (virtual link), the Schedule engine also
needs to be told to open/close the admission gates for each individual
virtual link. A fairly accurate (and detailed) description of how that
works is already present in sja1105_tas.c, since it is already used to
trigger the egress gates for the tc-taprio offload (IEEE 802.1Qbv).

The key point here is that the schedule engine supports 8
"subschedules" (execution threads that iterate through the global
schedule in parallel), with the constraint that no 2 hardware threads
may execute a schedule entry at the same time. For tc-taprio, each
egress port used one of these 8 subschedules, leaving a total of 4
subschedules unused. In principle we could have allocated 1 subschedule
for the tc-gate offload of each ingress port, but the schedules of all
virtual links installed on each ingress port would have needed to be
merged together before they could have been programmed to hardware.
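Merging here means taking every (time, gate state) event of every offloaded
tc-gate rule and inserting it into one list ordered by its offset within the
merged cycle, with two events on the same time slot treated as a conflict. A
minimal, self-contained sketch of that idea follows; the types and names are
simplified for illustration, and the driver's actual helper is
sja1105_insert_gate_entry(), added later in this patch.

  #include <linux/types.h>
  #include <linux/list.h>
  #include <linux/errno.h>

  /* Sketch only: one gate event in the merged subschedule */
  struct example_gate_event {
          struct list_head list;
          s64 time;               /* offset within the merged cycle, in ns */
          u8 gate_state;          /* 1 = open, 0 = closed */
  };

  /* Insert @e into a list kept sorted by time; two rules requesting an
   * event at the same instant is a scheduling conflict.
   */
  static int example_insert_gate_event(struct list_head *subschedule,
                                       struct example_gate_event *e)
  {
          struct example_gate_event *p;

          list_for_each_entry(p, subschedule, list) {
                  if (p->time == e->time)
                          return -EBUSY;  /* same time slot claimed twice */
                  if (e->time < p->time)
                          break;
          }
          /* insert before @p, or at the tail if the loop walked the whole list */
          list_add_tail(&e->list, &p->list);

          return 0;
  }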
So simplify our life and just merge the entire tc-gate configuration, for all virtual links on all ingress ports, into a single subschedule. Be sure to check that against the usual hardware scheduling conflicts, and program it to hardware alongside any tc-taprio subschedule that may be present. The following scenarios were tested: 1. Quantitative testing: tc qdisc add dev swp2 clsact tc filter add dev swp2 ingress flower skip_sw \ dst_mac 42:be:24:9b:76:20 \ action gate index 1 base-time 0 \ sched-entry OPEN 1200 -1 -1 \ sched-entry CLOSE 1200 -1 -1 \ action trap ping 192.168.1.2 -f PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data. ............................. --- 192.168.1.2 ping statistics --- 948 packets transmitted, 467 received, 50.7384% packet loss, time 9671ms 2. Qualitative testing (with a phase-aligned schedule - the clocks are synchronized by ptp4l, not shown here): Receiver (sja1105): tc qdisc add dev swp2 clsact now=$(phc_ctl /dev/ptp1 get | awk '/clock time is/ {print $5}') && \ sec=$(echo $now | awk -F. '{print $1}') && \ base_time="$(((sec + 2) * 1000000000))" && \ echo "base time ${base_time}" tc filter add dev swp2 ingress flower skip_sw \ dst_mac 42:be:24:9b:76:20 \ action gate base-time ${base_time} \ sched-entry OPEN 60000 -1 -1 \ sched-entry CLOSE 40000 -1 -1 \ action trap Sender (enetc): now=$(phc_ctl /dev/ptp0 get | awk '/clock time is/ {print $5}') && \ sec=$(echo $now | awk -F. '{print $1}') && \ base_time="$(((sec + 2) * 1000000000))" && \ echo "base time ${base_time}" tc qdisc add dev eno0 parent root taprio \ num_tc 8 \ map 0 1 2 3 4 5 6 7 \ queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \ base-time ${base_time} \ sched-entry S 01 50000 \ sched-entry S 00 50000 \ flags 2 ping -A 192.168.1.1 PING 192.168.1.1 (192.168.1.1): 56 data bytes ... ^C --- 192.168.1.1 ping statistics --- 1425 packets transmitted, 1424 packets received, 0% packet loss round-trip min/avg/max = 0.322/0.361/0.990 ms And just for comparison, with the tc-taprio schedule deleted: ping -A 192.168.1.1 PING 192.168.1.1 (192.168.1.1): 56 data bytes ... 
^C --- 192.168.1.1 ping statistics --- 33 packets transmitted, 19 packets received, 42% packet loss round-trip min/avg/max = 0.336/0.464/0.597 ms Signed-off-by: Vladimir Oltean --- drivers/net/dsa/sja1105/sja1105.h | 13 +- drivers/net/dsa/sja1105/sja1105_flower.c | 49 +- drivers/net/dsa/sja1105/sja1105_main.c | 1 + drivers/net/dsa/sja1105/sja1105_ptp.h | 13 + drivers/net/dsa/sja1105/sja1105_spi.c | 2 + .../net/dsa/sja1105/sja1105_static_config.h | 2 + drivers/net/dsa/sja1105/sja1105_tas.c | 127 ++++- drivers/net/dsa/sja1105/sja1105_tas.h | 31 ++ drivers/net/dsa/sja1105/sja1105_vl.c | 495 ++++++++++++++++++ drivers/net/dsa/sja1105/sja1105_vl.h | 31 ++ 10 files changed, 747 insertions(+), 17 deletions(-) diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h index 1756000f6936..8df2a5c53b02 100644 --- a/drivers/net/dsa/sja1105/sja1105.h +++ b/drivers/net/dsa/sja1105/sja1105.h @@ -36,6 +36,7 @@ struct sja1105_regs { u64 status; u64 port_control; u64 rgu; + u64 vl_status; u64 config; u64 sgmii; u64 rmii_pll1; @@ -156,8 +157,16 @@ struct sja1105_rule { /* SJA1105_RULE_VL */ struct { - unsigned long destports; enum sja1105_vl_type type; + unsigned long destports; + int sharindx; + int maxlen; + int ipv; + u64 base_time; + u64 cycle_time; + int num_entries; + struct action_gate_entry *entries; + struct flow_stats stats; } vl; }; }; @@ -304,6 +313,8 @@ int sja1105_cls_flower_del(struct dsa_switch *ds, int port, struct flow_cls_offload *cls, bool ingress); int sja1105_cls_flower_add(struct dsa_switch *ds, int port, struct flow_cls_offload *cls, bool ingress); +int sja1105_cls_flower_stats(struct dsa_switch *ds, int port, + struct flow_cls_offload *cls, bool ingress); void sja1105_flower_setup(struct dsa_switch *ds); void sja1105_flower_teardown(struct dsa_switch *ds); struct sja1105_rule *sja1105_rule_find(struct sja1105_private *priv, diff --git a/drivers/net/dsa/sja1105/sja1105_flower.c b/drivers/net/dsa/sja1105/sja1105_flower.c index 48d7cd8e5bef..e96dfde7aee4 100644 --- a/drivers/net/dsa/sja1105/sja1105_flower.c +++ b/drivers/net/dsa/sja1105/sja1105_flower.c @@ -310,6 +310,7 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port, const struct flow_action_entry *act; unsigned long cookie = cls->cookie; struct sja1105_key key; + bool gate_rule = false; bool vl_rule = false; int rc, i; @@ -366,6 +367,21 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port, if (rc) goto out; break; + case FLOW_ACTION_GATE: + gate_rule = true; + vl_rule = true; + + rc = sja1105_vl_gate(priv, port, extack, cookie, + &key, act->gate.index, + act->gate.prio, + act->gate.basetime, + act->gate.cycletime, + act->gate.cycletimeext, + act->gate.num_entries, + act->gate.entries); + if (rc) + goto out; + break; default: NL_SET_ERR_MSG_MOD(extack, "Action not supported"); @@ -374,8 +390,18 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port, } } - if (vl_rule && !rc) + if (vl_rule && !rc) { + /* Delay scheduling configuration until DESTPORTS has been + * populated by all other actions. 
+ */ + if (gate_rule) { + rc = sja1105_init_scheduling(priv); + if (rc) + goto out; + } + rc = sja1105_static_config_reload(priv, SJA1105_VIRTUAL_LINKS); + } out: return rc; @@ -421,6 +447,27 @@ int sja1105_cls_flower_del(struct dsa_switch *ds, int port, return sja1105_static_config_reload(priv, SJA1105_BEST_EFFORT_POLICING); } +int sja1105_cls_flower_stats(struct dsa_switch *ds, int port, + struct flow_cls_offload *cls, bool ingress) +{ + struct sja1105_private *priv = ds->priv; + struct sja1105_rule *rule = sja1105_rule_find(priv, cls->cookie); + int rc; + + if (!rule) + return 0; + + if (rule->type != SJA1105_RULE_VL) + return 0; + + rc = sja1105_vl_stats(priv, port, rule, &cls->stats, + cls->common.extack); + if (rc) + return rc; + + return 0; +} + void sja1105_flower_setup(struct dsa_switch *ds) { struct sja1105_private *priv = ds->priv; diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c index 8bb104ee73d5..666e54565df0 100644 --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c @@ -2369,6 +2369,7 @@ static const struct dsa_switch_ops sja1105_switch_ops = { .port_policer_del = sja1105_port_policer_del, .cls_flower_add = sja1105_cls_flower_add, .cls_flower_del = sja1105_cls_flower_del, + .cls_flower_stats = sja1105_cls_flower_stats, }; static int sja1105_check_device_id(struct sja1105_private *priv) diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.h b/drivers/net/dsa/sja1105/sja1105_ptp.h index 43480b24f1f0..6408d1158f2d 100644 --- a/drivers/net/dsa/sja1105/sja1105_ptp.h +++ b/drivers/net/dsa/sja1105/sja1105_ptp.h @@ -48,6 +48,19 @@ static inline s64 future_base_time(s64 base_time, s64 cycle_time, s64 now) return base_time + n * cycle_time; } +/* This is not a preprocessor macro because the "ns" argument may or may not be + * s64 at caller side. This ensures it is properly type-cast before div_s64. 
+ */ +static inline s64 ns_to_sja1105_delta(s64 ns) +{ + return div_s64(ns, 200); +} + +static inline s64 sja1105_delta_to_ns(s64 delta) +{ + return delta * 200; +} + struct sja1105_ptp_cmd { u64 startptpcp; /* start toggling PTP_CLK pin */ u64 stopptpcp; /* stop toggling PTP_CLK pin */ diff --git a/drivers/net/dsa/sja1105/sja1105_spi.c b/drivers/net/dsa/sja1105/sja1105_spi.c index 43f14a5c2718..0be75c49e6c3 100644 --- a/drivers/net/dsa/sja1105/sja1105_spi.c +++ b/drivers/net/dsa/sja1105/sja1105_spi.c @@ -439,6 +439,7 @@ static struct sja1105_regs sja1105et_regs = { .prod_id = 0x100BC3, .status = 0x1, .port_control = 0x11, + .vl_status = 0x10000, .config = 0x020000, .rgu = 0x100440, /* UM10944.pdf, Table 86, ACU Register overview */ @@ -472,6 +473,7 @@ static struct sja1105_regs sja1105pqrs_regs = { .prod_id = 0x100BC3, .status = 0x1, .port_control = 0x12, + .vl_status = 0x10000, .config = 0x020000, .rgu = 0x100440, /* UM10944.pdf, Table 86, ACU Register overview */ diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.h b/drivers/net/dsa/sja1105/sja1105_static_config.h index 1a8fcbbb57b6..b569e3de3590 100644 --- a/drivers/net/dsa/sja1105/sja1105_static_config.h +++ b/drivers/net/dsa/sja1105/sja1105_static_config.h @@ -302,6 +302,8 @@ struct sja1105_vl_lookup_entry { u64 vlid; }; }; + /* Not part of hardware structure */ + unsigned long flow_cookie; }; struct sja1105_vl_policing_entry { diff --git a/drivers/net/dsa/sja1105/sja1105_tas.c b/drivers/net/dsa/sja1105/sja1105_tas.c index 77e547b4cd89..3aa1a8b5f766 100644 --- a/drivers/net/dsa/sja1105/sja1105_tas.c +++ b/drivers/net/dsa/sja1105/sja1105_tas.c @@ -7,7 +7,6 @@ #define SJA1105_TAS_CLKSRC_STANDALONE 1 #define SJA1105_TAS_CLKSRC_AS6802 2 #define SJA1105_TAS_CLKSRC_PTP 3 -#define SJA1105_TAS_MAX_DELTA BIT(19) #define SJA1105_GATE_MASK GENMASK_ULL(SJA1105_NUM_TC - 1, 0) #define work_to_sja1105_tas(d) \ @@ -15,22 +14,10 @@ #define tas_to_sja1105(d) \ container_of((d), struct sja1105_private, tas_data) -/* This is not a preprocessor macro because the "ns" argument may or may not be - * s64 at caller side. This ensures it is properly type-cast before div_s64. - */ -static s64 ns_to_sja1105_delta(s64 ns) -{ - return div_s64(ns, 200); -} - -static s64 sja1105_delta_to_ns(s64 delta) -{ - return delta * 200; -} - static int sja1105_tas_set_runtime_params(struct sja1105_private *priv) { struct sja1105_tas_data *tas_data = &priv->tas_data; + struct sja1105_gating_config *gating_cfg = &tas_data->gating_cfg; struct dsa_switch *ds = priv->ds; s64 earliest_base_time = S64_MAX; s64 latest_base_time = 0; @@ -59,6 +46,19 @@ static int sja1105_tas_set_runtime_params(struct sja1105_private *priv) } } + if (!list_empty(&gating_cfg->entries)) { + tas_data->enabled = true; + + if (max_cycle_time < gating_cfg->cycle_time) + max_cycle_time = gating_cfg->cycle_time; + if (latest_base_time < gating_cfg->base_time) + latest_base_time = gating_cfg->base_time; + if (earliest_base_time > gating_cfg->base_time) { + earliest_base_time = gating_cfg->base_time; + its_cycle_time = gating_cfg->cycle_time; + } + } + if (!tas_data->enabled) return 0; @@ -155,13 +155,14 @@ static int sja1105_tas_set_runtime_params(struct sja1105_private *priv) * their "subschedule end index" (subscheind) equal to the last valid * subschedule's end index (in this case 5). 
*/ -static int sja1105_init_scheduling(struct sja1105_private *priv) +int sja1105_init_scheduling(struct sja1105_private *priv) { struct sja1105_schedule_entry_points_entry *schedule_entry_points; struct sja1105_schedule_entry_points_params_entry *schedule_entry_points_params; struct sja1105_schedule_params_entry *schedule_params; struct sja1105_tas_data *tas_data = &priv->tas_data; + struct sja1105_gating_config *gating_cfg = &tas_data->gating_cfg; struct sja1105_schedule_entry *schedule; struct sja1105_table *table; int schedule_start_idx; @@ -213,6 +214,11 @@ static int sja1105_init_scheduling(struct sja1105_private *priv) } } + if (!list_empty(&gating_cfg->entries)) { + num_entries += gating_cfg->num_entries; + num_cycles++; + } + /* Nothing to do */ if (!num_cycles) return 0; @@ -312,6 +318,42 @@ static int sja1105_init_scheduling(struct sja1105_private *priv) cycle++; } + if (!list_empty(&gating_cfg->entries)) { + struct sja1105_gate_entry *e; + + /* Relative base time */ + s64 rbt; + + schedule_start_idx = k; + schedule_end_idx = k + gating_cfg->num_entries - 1; + rbt = future_base_time(gating_cfg->base_time, + gating_cfg->cycle_time, + tas_data->earliest_base_time); + rbt -= tas_data->earliest_base_time; + entry_point_delta = ns_to_sja1105_delta(rbt) + 1; + + schedule_entry_points[cycle].subschindx = cycle; + schedule_entry_points[cycle].delta = entry_point_delta; + schedule_entry_points[cycle].address = schedule_start_idx; + + for (i = cycle; i < 8; i++) + schedule_params->subscheind[i] = schedule_end_idx; + + list_for_each_entry(e, &gating_cfg->entries, list) { + schedule[k].delta = ns_to_sja1105_delta(e->interval); + schedule[k].destports = e->rule->vl.destports; + schedule[k].setvalid = true; + schedule[k].txen = true; + schedule[k].vlindex = e->rule->vl.sharindx; + schedule[k].winstindex = e->rule->vl.sharindx; + if (e->gate_state) /* Gate open */ + schedule[k].winst = true; + else /* Gate closed */ + schedule[k].winend = true; + k++; + } + } + return 0; } @@ -415,6 +457,54 @@ sja1105_tas_check_conflicts(struct sja1105_private *priv, int port, return false; } +/* Check the tc-taprio configuration on @port for conflicts with the tc-gate + * global subschedule. If @port is -1, check it against all ports. + * To reuse the sja1105_tas_check_conflicts logic without refactoring it, + * convert the gating configuration to a dummy tc-taprio offload structure. 
+ */ +bool sja1105_gating_check_conflicts(struct sja1105_private *priv, int port, + struct netlink_ext_ack *extack) +{ + struct sja1105_gating_config *gating_cfg = &priv->tas_data.gating_cfg; + size_t num_entries = gating_cfg->num_entries; + struct tc_taprio_qopt_offload *dummy; + struct sja1105_gate_entry *e; + bool conflict; + int i = 0; + + if (list_empty(&gating_cfg->entries)) + return false; + + dummy = kzalloc(sizeof(struct tc_taprio_sched_entry) * num_entries + + sizeof(struct tc_taprio_qopt_offload), GFP_KERNEL); + if (!dummy) { + NL_SET_ERR_MSG_MOD(extack, "Failed to allocate memory"); + return true; + } + + dummy->num_entries = num_entries; + dummy->base_time = gating_cfg->base_time; + dummy->cycle_time = gating_cfg->cycle_time; + + list_for_each_entry(e, &gating_cfg->entries, list) + dummy->entries[i++].interval = e->interval; + + if (port != -1) { + conflict = sja1105_tas_check_conflicts(priv, port, dummy); + } else { + for (port = 0; port < SJA1105_NUM_PORTS; port++) { + conflict = sja1105_tas_check_conflicts(priv, port, + dummy); + if (conflict) + break; + } + } + + kfree(dummy); + + return conflict; +} + int sja1105_setup_tc_taprio(struct dsa_switch *ds, int port, struct tc_taprio_qopt_offload *admin) { @@ -473,6 +563,11 @@ int sja1105_setup_tc_taprio(struct dsa_switch *ds, int port, return -ERANGE; } + if (sja1105_gating_check_conflicts(priv, port, NULL)) { + dev_err(ds->dev, "Conflict with tc-gate schedule\n"); + return -ERANGE; + } + tas_data->offload[port] = taprio_offload_get(admin); rc = sja1105_init_scheduling(priv); @@ -779,6 +874,8 @@ void sja1105_tas_setup(struct dsa_switch *ds) INIT_WORK(&tas_data->tas_work, sja1105_tas_state_machine); tas_data->state = SJA1105_TAS_STATE_DISABLED; tas_data->last_op = SJA1105_PTP_NONE; + + INIT_LIST_HEAD(&tas_data->gating_cfg.entries); } void sja1105_tas_teardown(struct dsa_switch *ds) diff --git a/drivers/net/dsa/sja1105/sja1105_tas.h b/drivers/net/dsa/sja1105/sja1105_tas.h index b226c3dfd5b1..2dc1856d403d 100644 --- a/drivers/net/dsa/sja1105/sja1105_tas.h +++ b/drivers/net/dsa/sja1105/sja1105_tas.h @@ -6,6 +6,8 @@ #include +#define SJA1105_TAS_MAX_DELTA BIT(18) + #if IS_ENABLED(CONFIG_NET_DSA_SJA1105_TAS) enum sja1105_tas_state { @@ -20,8 +22,23 @@ enum sja1105_ptp_op { SJA1105_PTP_ADJUSTFREQ, }; +struct sja1105_gate_entry { + struct list_head list; + struct sja1105_rule *rule; + s64 interval; + u8 gate_state; +}; + +struct sja1105_gating_config { + u64 cycle_time; + s64 base_time; + int num_entries; + struct list_head entries; +}; + struct sja1105_tas_data { struct tc_taprio_qopt_offload *offload[SJA1105_NUM_PORTS]; + struct sja1105_gating_config gating_cfg; enum sja1105_tas_state state; enum sja1105_ptp_op last_op; struct work_struct tas_work; @@ -31,6 +48,8 @@ struct sja1105_tas_data { bool enabled; }; +struct sja1105_private; + int sja1105_setup_tc_taprio(struct dsa_switch *ds, int port, struct tc_taprio_qopt_offload *admin); @@ -42,6 +61,11 @@ void sja1105_tas_clockstep(struct dsa_switch *ds); void sja1105_tas_adjfreq(struct dsa_switch *ds); +bool sja1105_gating_check_conflicts(struct sja1105_private *priv, int port, + struct netlink_ext_ack *extack); + +int sja1105_init_scheduling(struct sja1105_private *priv); + #else /* C doesn't allow empty structures, bah! 
*/ @@ -63,6 +87,13 @@ static inline void sja1105_tas_clockstep(struct dsa_switch *ds) { } static inline void sja1105_tas_adjfreq(struct dsa_switch *ds) { } +static inline bool +sja1105_gating_check_conflicts(struct dsa_switch *ds, int port, + struct netlink_ext_ack *extack) +{ + return true; +} + #endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_TAS) */ #endif /* _SJA1105_TAS_H */ diff --git a/drivers/net/dsa/sja1105/sja1105_vl.c b/drivers/net/dsa/sja1105/sja1105_vl.c index c226779b8275..4974cfba1328 100644 --- a/drivers/net/dsa/sja1105/sja1105_vl.c +++ b/drivers/net/dsa/sja1105/sja1105_vl.c @@ -1,9 +1,13 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright 2020, NXP Semiconductors */ +#include #include #include "sja1105.h" +#define SJA1105_VL_FRAME_MEMORY 100 +#define SJA1105_SIZE_VL_STATUS 8 + /* The switch flow classification core implements TTEthernet, which 'thinks' in * terms of Virtual Links (VL), a concept borrowed from ARINC 664 part 7. * However it also has one other operating mode (VLLUPFORMAT=0) where it acts @@ -137,18 +141,33 @@ static bool sja1105_vl_key_lower(struct sja1105_vl_lookup_entry *a, static int sja1105_init_virtual_links(struct sja1105_private *priv, struct netlink_ext_ack *extack) { + struct sja1105_l2_forwarding_params_entry *l2_fwd_params; + struct sja1105_vl_forwarding_params_entry *vl_fwd_params; + struct sja1105_vl_policing_entry *vl_policing; + struct sja1105_vl_forwarding_entry *vl_fwd; struct sja1105_vl_lookup_entry *vl_lookup; + bool have_critical_virtual_links = false; struct sja1105_table *table; struct sja1105_rule *rule; int num_virtual_links = 0; + int max_sharindx = 0; int i, j, k; + table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING_PARAMS]; + l2_fwd_params = table->entries; + l2_fwd_params->part_spc[0] = SJA1105_MAX_FRAME_MEMORY; + /* Figure out the dimensioning of the problem */ list_for_each_entry(rule, &priv->flow_block.rules, list) { if (rule->type != SJA1105_RULE_VL) continue; /* Each VL lookup entry matches on a single ingress port */ num_virtual_links += hweight_long(rule->port_mask); + + if (rule->vl.type != SJA1105_VL_NONCRITICAL) + have_critical_virtual_links = true; + if (max_sharindx < rule->vl.sharindx) + max_sharindx = rule->vl.sharindx; } if (num_virtual_links > SJA1105_MAX_VL_LOOKUP_COUNT) { @@ -156,6 +175,13 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv, return -ENOSPC; } + if (max_sharindx + 1 > SJA1105_MAX_VL_LOOKUP_COUNT) { + NL_SET_ERR_MSG_MOD(extack, "Policer index out of range"); + return -ENOSPC; + } + + max_sharindx = max_t(int, num_virtual_links, max_sharindx) + 1; + /* Discard previous VL Lookup Table */ table = &priv->static_config.tables[BLK_IDX_VL_LOOKUP]; if (table->entry_count) { @@ -163,6 +189,27 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv, table->entry_count = 0; } + /* Discard previous VL Policing Table */ + table = &priv->static_config.tables[BLK_IDX_VL_POLICING]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + + /* Discard previous VL Forwarding Table */ + table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + + /* Discard previous VL Forwarding Parameters Table */ + table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING_PARAMS]; + if (table->entry_count) { + kfree(table->entries); + table->entry_count = 0; + } + /* Nothing to do */ if (!num_virtual_links) return 0; @@ -208,6 +255,7 @@ static int sja1105_init_virtual_links(struct 
sja1105_private *priv, vl_lookup[k].destports = rule->vl.destports; else vl_lookup[k].iscritical = true; + vl_lookup[k].flow_cookie = rule->cookie; k++; } } @@ -232,6 +280,68 @@ static int sja1105_init_virtual_links(struct sja1105_private *priv, } } + if (!have_critical_virtual_links) + return 0; + + /* VL Policing Table */ + table = &priv->static_config.tables[BLK_IDX_VL_POLICING]; + table->entries = kcalloc(max_sharindx, table->ops->unpacked_entry_size, + GFP_KERNEL); + if (!table->entries) + return -ENOMEM; + table->entry_count = max_sharindx; + vl_policing = table->entries; + + /* VL Forwarding Table */ + table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING]; + table->entries = kcalloc(max_sharindx, table->ops->unpacked_entry_size, + GFP_KERNEL); + if (!table->entries) + return -ENOMEM; + table->entry_count = max_sharindx; + vl_fwd = table->entries; + + /* VL Forwarding Parameters Table */ + table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING_PARAMS]; + table->entries = kcalloc(1, table->ops->unpacked_entry_size, + GFP_KERNEL); + if (!table->entries) + return -ENOMEM; + table->entry_count = 1; + vl_fwd_params = table->entries; + + /* Reserve some frame buffer memory for the critical-traffic virtual + * links (this needs to be done). At the moment, hardcode the value + * at 100 blocks of 128 bytes of memory each. This leaves 829 blocks + * remaining for best-effort traffic. TODO: figure out a more flexible + * way to perform the frame buffer partitioning. + */ + l2_fwd_params->part_spc[0] = SJA1105_MAX_FRAME_MEMORY - + SJA1105_VL_FRAME_MEMORY; + vl_fwd_params->partspc[0] = SJA1105_VL_FRAME_MEMORY; + + for (i = 0; i < num_virtual_links; i++) { + unsigned long cookie = vl_lookup[i].flow_cookie; + struct sja1105_rule *rule = sja1105_rule_find(priv, cookie); + + if (rule->vl.type == SJA1105_VL_NONCRITICAL) + continue; + if (rule->vl.type == SJA1105_VL_TIME_TRIGGERED) { + int sharindx = rule->vl.sharindx; + + vl_policing[i].type = 1; + vl_policing[i].sharindx = sharindx; + vl_policing[i].maxlen = rule->vl.maxlen; + vl_policing[sharindx].type = 1; + + vl_fwd[i].type = 1; + vl_fwd[sharindx].type = 1; + vl_fwd[sharindx].priority = rule->vl.ipv; + vl_fwd[sharindx].partition = 0; + vl_fwd[sharindx].destports = rule->vl.destports; + } + } + return 0; } @@ -300,3 +410,388 @@ int sja1105_vl_delete(struct sja1105_private *priv, int port, return sja1105_static_config_reload(priv, SJA1105_VIRTUAL_LINKS); } + +/* Insert into the global gate list, sorted by gate action time. 
*/ +static int sja1105_insert_gate_entry(struct sja1105_gating_config *gating_cfg, + struct sja1105_rule *rule, + u8 gate_state, s64 entry_time, + struct netlink_ext_ack *extack) +{ + struct sja1105_gate_entry *e; + int rc; + + e = kzalloc(sizeof(*e), GFP_KERNEL); + if (!e) + return -ENOMEM; + + e->rule = rule; + e->gate_state = gate_state; + e->interval = entry_time; + + if (list_empty(&gating_cfg->entries)) { + list_add(&e->list, &gating_cfg->entries); + } else { + struct sja1105_gate_entry *p; + + list_for_each_entry(p, &gating_cfg->entries, list) { + if (p->interval == e->interval) { + NL_SET_ERR_MSG_MOD(extack, + "Gate conflict"); + rc = -EBUSY; + goto err; + } + + if (e->interval < p->interval) + break; + } + list_add(&e->list, p->list.prev); + } + + gating_cfg->num_entries++; + + return 0; +err: + kfree(e); + return rc; +} + +static void +sja1105_gating_cfg_time_to_interval(struct sja1105_gating_config *gating_cfg, + u64 cycle_time) +{ + struct sja1105_gate_entry *last_e; + struct sja1105_gate_entry *e; + struct list_head *prev; + u32 prev_time = 0; + + list_for_each_entry(e, &gating_cfg->entries, list) { + struct sja1105_gate_entry *p; + + prev = e->list.prev; + + if (prev == &gating_cfg->entries) + continue; + + p = list_entry(prev, struct sja1105_gate_entry, list); + prev_time = e->interval; + p->interval = e->interval - p->interval; + } + last_e = list_last_entry(&gating_cfg->entries, + struct sja1105_gate_entry, list); + if (last_e->list.prev != &gating_cfg->entries) + last_e->interval = cycle_time - last_e->interval; +} + +static void sja1105_free_gating_config(struct sja1105_gating_config *gating_cfg) +{ + struct sja1105_gate_entry *e, *n; + + list_for_each_entry_safe(e, n, &gating_cfg->entries, list) { + list_del(&e->list); + kfree(e); + } +} + +static int sja1105_compose_gating_subschedule(struct sja1105_private *priv, + struct netlink_ext_ack *extack) +{ + struct sja1105_gating_config *gating_cfg = &priv->tas_data.gating_cfg; + struct sja1105_rule *rule; + s64 max_cycle_time = 0; + s64 its_base_time = 0; + int i, rc = 0; + + list_for_each_entry(rule, &priv->flow_block.rules, list) { + if (rule->type != SJA1105_RULE_VL) + continue; + if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED) + continue; + + if (max_cycle_time < rule->vl.cycle_time) { + max_cycle_time = rule->vl.cycle_time; + its_base_time = rule->vl.base_time; + } + } + + if (!max_cycle_time) + return 0; + + dev_dbg(priv->ds->dev, "max_cycle_time %lld its_base_time %lld\n", + max_cycle_time, its_base_time); + + sja1105_free_gating_config(gating_cfg); + + gating_cfg->base_time = its_base_time; + gating_cfg->cycle_time = max_cycle_time; + gating_cfg->num_entries = 0; + + list_for_each_entry(rule, &priv->flow_block.rules, list) { + s64 time; + s64 rbt; + + if (rule->type != SJA1105_RULE_VL) + continue; + if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED) + continue; + + /* Calculate the difference between this gating schedule's + * base time, and the base time of the gating schedule with the + * longest cycle time. We call it the relative base time (rbt). 
+ */ + rbt = future_base_time(rule->vl.base_time, rule->vl.cycle_time, + its_base_time); + rbt -= its_base_time; + + time = rbt; + + for (i = 0; i < rule->vl.num_entries; i++) { + u8 gate_state = rule->vl.entries[i].gate_state; + s64 entry_time = time; + + while (entry_time < max_cycle_time) { + rc = sja1105_insert_gate_entry(gating_cfg, rule, + gate_state, + entry_time, + extack); + if (rc) + goto err; + + entry_time += rule->vl.cycle_time; + } + time += rule->vl.entries[i].interval; + } + } + + sja1105_gating_cfg_time_to_interval(gating_cfg, max_cycle_time); + + return 0; +err: + sja1105_free_gating_config(gating_cfg); + return rc; +} + +int sja1105_vl_gate(struct sja1105_private *priv, int port, + struct netlink_ext_ack *extack, unsigned long cookie, + struct sja1105_key *key, u32 index, s32 prio, + u64 base_time, u64 cycle_time, u64 cycle_time_ext, + u32 num_entries, struct action_gate_entry *entries) +{ + struct sja1105_rule *rule = sja1105_rule_find(priv, cookie); + int maxlen = -1; + int ipv = -1; + int i, rc; + s32 rem; + + if (cycle_time_ext) { + NL_SET_ERR_MSG_MOD(extack, + "Cycle time extension not supported"); + return -EOPNOTSUPP; + } + + div_s64_rem(base_time, sja1105_delta_to_ns(1), &rem); + if (rem) { + NL_SET_ERR_MSG_MOD(extack, + "Base time must be multiple of 200 ns"); + return -ERANGE; + } + + div_s64_rem(cycle_time, sja1105_delta_to_ns(1), &rem); + if (rem) { + NL_SET_ERR_MSG_MOD(extack, + "Cycle time must be multiple of 200 ns"); + return -ERANGE; + } + + if (dsa_port_is_vlan_filtering(dsa_to_port(priv->ds, port)) && + key->type != SJA1105_KEY_VLAN_AWARE_VL) { + NL_SET_ERR_MSG_MOD(extack, + "Can only gate based on {DMAC, VID, PCP}"); + return -EOPNOTSUPP; + } else if (key->type != SJA1105_KEY_VLAN_UNAWARE_VL) { + NL_SET_ERR_MSG_MOD(extack, + "Can only gate based on DMAC"); + return -EOPNOTSUPP; + } + + if (!rule) { + rule = kzalloc(sizeof(*rule), GFP_KERNEL); + if (!rule) + return -ENOMEM; + + list_add(&rule->list, &priv->flow_block.rules); + rule->cookie = cookie; + rule->type = SJA1105_RULE_VL; + rule->key = *key; + rule->vl.type = SJA1105_VL_TIME_TRIGGERED; + rule->vl.sharindx = index; + rule->vl.base_time = base_time; + rule->vl.cycle_time = cycle_time; + rule->vl.num_entries = num_entries; + rule->vl.entries = kcalloc(num_entries, + sizeof(struct action_gate_entry), + GFP_KERNEL); + if (!rule->vl.entries) { + rc = -ENOMEM; + goto out; + } + + for (i = 0; i < num_entries; i++) { + div_s64_rem(entries[i].interval, + sja1105_delta_to_ns(1), &rem); + if (rem) { + NL_SET_ERR_MSG_MOD(extack, + "Interval must be multiple of 200 ns"); + rc = -ERANGE; + goto out; + } + + if (!entries[i].interval) { + NL_SET_ERR_MSG_MOD(extack, + "Interval cannot be zero"); + rc = -ERANGE; + goto out; + } + + if (ns_to_sja1105_delta(entries[i].interval) > + SJA1105_TAS_MAX_DELTA) { + NL_SET_ERR_MSG_MOD(extack, + "Maximum interval is 52 ms"); + rc = -ERANGE; + goto out; + } + + if (maxlen == -1) { + maxlen = entries[i].maxoctets; + } else if (maxlen != entries[i].maxoctets) { + NL_SET_ERR_MSG_MOD(extack, + "Only support a single MAXLEN per VL"); + rc = -EOPNOTSUPP; + goto out; + } + + if (ipv == -1) { + ipv = entries[i].ipv; + } else if (ipv != entries[i].ipv) { + NL_SET_ERR_MSG_MOD(extack, + "Only support a single IPV per VL"); + rc = -EOPNOTSUPP; + goto out; + } + + rule->vl.entries[i] = entries[i]; + } + + if (maxlen == -1) + maxlen = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN; + if (ipv == -1) { + if (key->type == SJA1105_KEY_VLAN_AWARE_VL) + ipv = key->vl.pcp; + else + ipv = 0; + } + + 
rule->vl.maxlen = maxlen; + rule->vl.ipv = ipv; + } + + rule->port_mask |= BIT(port); + + rc = sja1105_compose_gating_subschedule(priv, extack); + if (rc) + goto out; + + rc = sja1105_init_virtual_links(priv, extack); + if (rc) + goto out; + + if (sja1105_gating_check_conflicts(priv, -1, extack)) { + NL_SET_ERR_MSG_MOD(extack, "Conflict with tc-taprio schedule"); + rc = -ERANGE; + goto out; + } + +out: + if (rc) { + rule->port_mask &= ~BIT(port); + if (!rule->port_mask) { + list_del(&rule->list); + kfree(rule->vl.entries); + kfree(rule); + } + } + + return rc; +} + +static int sja1105_find_vlid(struct sja1105_private *priv, int port, + struct sja1105_key *key) +{ + struct sja1105_vl_lookup_entry *vl_lookup; + struct sja1105_table *table; + int i; + + if (WARN_ON(key->type != SJA1105_KEY_VLAN_AWARE_VL && + key->type != SJA1105_KEY_VLAN_UNAWARE_VL)) + return -1; + + table = &priv->static_config.tables[BLK_IDX_VL_LOOKUP]; + vl_lookup = table->entries; + + for (i = 0; i < table->entry_count; i++) { + if (key->type == SJA1105_KEY_VLAN_AWARE_VL) { + if (vl_lookup[i].port == port && + vl_lookup[i].macaddr == key->vl.dmac && + vl_lookup[i].vlanid == key->vl.vid && + vl_lookup[i].vlanprior == key->vl.pcp) + return i; + } else { + if (vl_lookup[i].port == port && + vl_lookup[i].macaddr == key->vl.dmac) + return i; + } + } + + return -1; +} + +int sja1105_vl_stats(struct sja1105_private *priv, int port, + struct sja1105_rule *rule, struct flow_stats *stats, + struct netlink_ext_ack *extack) +{ + const struct sja1105_regs *regs = priv->info->regs; + u8 buf[SJA1105_SIZE_VL_STATUS] = {0}; + u64 unreleased; + u64 timingerr; + u64 lengtherr; + int vlid, rc; + u64 pkts; + + if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED) + return 0; + + vlid = sja1105_find_vlid(priv, port, &rule->key); + if (vlid < 0) + return 0; + + rc = sja1105_xfer_buf(priv, SPI_READ, regs->vl_status + 2 * vlid, buf, + SJA1105_SIZE_VL_STATUS); + if (rc) { + NL_SET_ERR_MSG_MOD(extack, "SPI access failed"); + return rc; + } + + sja1105_unpack(buf, &timingerr, 31, 16, SJA1105_SIZE_VL_STATUS); + sja1105_unpack(buf, &unreleased, 15, 0, SJA1105_SIZE_VL_STATUS); + sja1105_unpack(buf, &lengtherr, 47, 32, SJA1105_SIZE_VL_STATUS); + + pkts = timingerr + unreleased + lengtherr; + + flow_stats_update(stats, 0, pkts - rule->vl.stats.pkts, + jiffies - rule->vl.stats.lastused, + FLOW_ACTION_HW_STATS_IMMEDIATE); + + rule->vl.stats.pkts = pkts; + rule->vl.stats.lastused = jiffies; + + return 0; +} diff --git a/drivers/net/dsa/sja1105/sja1105_vl.h b/drivers/net/dsa/sja1105/sja1105_vl.h index 08ee5557b463..323fa0535af7 100644 --- a/drivers/net/dsa/sja1105/sja1105_vl.h +++ b/drivers/net/dsa/sja1105/sja1105_vl.h @@ -15,6 +15,16 @@ int sja1105_vl_delete(struct sja1105_private *priv, int port, struct sja1105_rule *rule, struct netlink_ext_ack *extack); +int sja1105_vl_gate(struct sja1105_private *priv, int port, + struct netlink_ext_ack *extack, unsigned long cookie, + struct sja1105_key *key, u32 index, s32 prio, + u64 base_time, u64 cycle_time, u64 cycle_time_ext, + u32 num_entries, struct action_gate_entry *entries); + +int sja1105_vl_stats(struct sja1105_private *priv, int port, + struct sja1105_rule *rule, struct flow_stats *stats, + struct netlink_ext_ack *extack); + #else static inline int sja1105_vl_redirect(struct sja1105_private *priv, int port, @@ -36,6 +46,27 @@ static inline int sja1105_vl_delete(struct sja1105_private *priv, return -EOPNOTSUPP; } +static inline int sja1105_vl_gate(struct sja1105_private *priv, int port, + struct netlink_ext_ack 
*extack, + unsigned long cookie, + struct sja1105_key *key, u32 index, s32 prio, + u64 base_time, u64 cycle_time, + u64 cycle_time_ext, u32 num_entries, + struct action_gate_entry *entries) +{ + NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in"); + return -EOPNOTSUPP; +} + +static inline int sja1105_vl_stats(struct sja1105_private *priv, int port, + struct sja1105_rule *rule, + struct flow_stats *stats, + struct netlink_ext_ack *extack) +{ + NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in"); + return -EOPNOTSUPP; +} + #endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_VL) */ #endif /* _SJA1105_VL_H */
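
For readers following the gating logic above, here is a minimal user-space sketch (not part of the patch, illustrative values and names only) of the subschedule math performed by sja1105_compose_gating_subschedule() and sja1105_gating_cfg_time_to_interval(): events from tc-gate rules with shorter cycle times are repeated until they cover the longest cycle time, sorted by absolute offset, and the offsets are then turned into back-to-back intervals. The struct and helper names (gate_event, cmp_time) and the 200 us / 100 us cycles are made up for the example.

/* Illustrative stand-alone sketch of the gating subschedule composition.
 * Not kernel code; kernel lists are replaced by a plain array.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct gate_event {
	int64_t time;		/* absolute offset within the longest cycle */
	int gate_state;		/* 1 = open, 0 = closed */
};

static int cmp_time(const void *a, const void *b)
{
	const struct gate_event *x = a, *y = b;

	return (x->time > y->time) - (x->time < y->time);
}

int main(void)
{
	/* Two hypothetical tc-gate rules, already reduced to a relative base
	 * time (rbt) against the rule with the longest cycle time.
	 */
	struct {
		int64_t rbt;		/* offset of first event, in ns */
		int64_t cycle_time;	/* rule cycle time, in ns */
		int gate_state;
	} rules[] = {
		{     0, 200000, 1 },	/* 200 us cycle, gate open   */
		{ 20000, 100000, 0 },	/* 100 us cycle, gate closed */
	};
	int64_t max_cycle_time = 200000;
	struct gate_event events[16];
	int n = 0, i;

	/* Repeat each rule's event every rule cycle until the longest cycle
	 * is covered (the expansion loop in the compose function).
	 */
	for (i = 0; i < 2; i++) {
		int64_t t = rules[i].rbt;

		while (t < max_cycle_time) {
			events[n].time = t;
			events[n].gate_state = rules[i].gate_state;
			n++;
			t += rules[i].cycle_time;
		}
	}

	/* Sort by absolute time; in the driver this is the sorted insertion
	 * into gating_cfg->entries, where equal times are rejected with
	 * "Gate conflict".
	 */
	qsort(events, n, sizeof(events[0]), cmp_time);

	/* Convert absolute offsets into consecutive intervals, padding the
	 * last entry out to the end of the cycle, as
	 * sja1105_gating_cfg_time_to_interval() does.
	 */
	for (i = 0; i < n; i++) {
		int64_t next = (i == n - 1) ? max_cycle_time
					    : events[i + 1].time;

		printf("entry %d: gate %s for %lld ns\n", i,
		       events[i].gate_state ? "open" : "closed",
		       (long long)(next - events[i].time));
	}

	return 0;
}

Running the sketch prints three entries of 20000, 100000 and 80000 ns, which is the shape of schedule the driver programs: all time-triggered VL rules are merged into one gating subschedule keyed to the rule with the largest cycle time, which is also why sja1105_gating_check_conflicts() builds a dummy taprio offload from the merged entries and checks it against any tc-taprio schedule (and vice versa in sja1105_setup_tc_taprio()).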