From patchwork Fri May 20 18:11:24 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 574668
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com,
    Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com,
    linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org,
    robh@kernel.org, lorenzo.bianconi@redhat.com
Subject: [PATCH v3 net-next 01/16] arm64: dts: mediatek: mt7986: introduce ethernet nodes
Date: Fri, 20 May 2022 20:11:24 +0200
Message-Id: <6b41d8c5f3c88328947a9d0850ac01f1f98e7da5.1653069056.git.lorenzo@kernel.org>

Introduce ethernet nodes in mt7986 bindings in order to enable
mt7986a/mt7986b ethernet support.
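The nodes added below wire gmac0 to the MT7531 switch CPU port over a fixed
2500base-x link. As a hedged sketch of how a MAC driver typically consumes
the "phy-mode" and "fixed-link" properties on the kernel side (this is not
the mtk_eth_soc implementation; example_parse_mac_node() and its locals are
made up for illustration), the generic OF helpers are used roughly like this:

#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
#include <linux/phy.h>

static int example_parse_mac_node(struct device_node *np)
{
    phy_interface_t phy_mode;
    int err;

    /* phy-mode = "2500base-x" resolves to PHY_INTERFACE_MODE_2500BASEX */
    err = of_get_phy_mode(np, &phy_mode);
    if (err)
        return err;

    /* gmac0 has no PHY: the link to the switch CPU port is fixed */
    if (of_phy_is_fixed_link(np)) {
        err = of_phy_register_fixed_link(np);
        if (err)
            return err;
    }

    return 0;
}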
Co-developed-by: Sam Shih Signed-off-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts | 74 ++++++++++++++++++++ arch/arm64/boot/dts/mediatek/mt7986a.dtsi | 39 +++++++++++ arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts | 70 ++++++++++++++++++ 3 files changed, 183 insertions(+) diff --git a/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts b/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts index 21e420829572..882277a52b69 100644 --- a/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts +++ b/arch/arm64/boot/dts/mediatek/mt7986a-rfb.dts @@ -25,6 +25,80 @@ memory@40000000 { }; }; +ð { + status = "okay"; + + gmac0: mac@0 { + compatible = "mediatek,eth-mac"; + reg = <0>; + phy-mode = "2500base-x"; + + fixed-link { + speed = <2500>; + full-duplex; + pause; + }; + }; + + mdio: mdio-bus { + #address-cells = <1>; + #size-cells = <0>; + }; +}; + +&mdio { + switch: switch@0 { + compatible = "mediatek,mt7531"; + reg = <31>; + reset-gpios = <&pio 5 0>; + }; +}; + +&switch { + ports { + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + label = "lan0"; + }; + + port@1 { + reg = <1>; + label = "lan1"; + }; + + port@2 { + reg = <2>; + label = "lan2"; + }; + + port@3 { + reg = <3>; + label = "lan3"; + }; + + port@4 { + reg = <4>; + label = "lan4"; + }; + + port@6 { + reg = <6>; + label = "cpu"; + ethernet = <&gmac0>; + phy-mode = "2500base-x"; + + fixed-link { + speed = <2500>; + full-duplex; + pause; + }; + }; + }; +}; + &uart0 { status = "okay"; }; diff --git a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi index 694acf8f5b70..d2636a0ed152 100644 --- a/arch/arm64/boot/dts/mediatek/mt7986a.dtsi +++ b/arch/arm64/boot/dts/mediatek/mt7986a.dtsi @@ -222,6 +222,45 @@ ethsys: syscon@15000000 { #reset-cells = <1>; }; + eth: ethernet@15100000 { + compatible = "mediatek,mt7986-eth"; + reg = <0 0x15100000 0 0x80000>; + interrupts = , + , + , + ; + clocks = <ðsys CLK_ETH_FE_EN>, + <ðsys CLK_ETH_GP2_EN>, + <ðsys CLK_ETH_GP1_EN>, + <ðsys CLK_ETH_WOCPU1_EN>, + <ðsys CLK_ETH_WOCPU0_EN>, + <&sgmiisys0 CLK_SGMII0_TX250M_EN>, + <&sgmiisys0 CLK_SGMII0_RX250M_EN>, + <&sgmiisys0 CLK_SGMII0_CDR_REF>, + <&sgmiisys0 CLK_SGMII0_CDR_FB>, + <&sgmiisys1 CLK_SGMII1_TX250M_EN>, + <&sgmiisys1 CLK_SGMII1_RX250M_EN>, + <&sgmiisys1 CLK_SGMII1_CDR_REF>, + <&sgmiisys1 CLK_SGMII1_CDR_FB>, + <&topckgen CLK_TOP_NETSYS_SEL>, + <&topckgen CLK_TOP_NETSYS_500M_SEL>; + clock-names = "fe", "gp2", "gp1", "wocpu1", "wocpu0", + "sgmii_tx250m", "sgmii_rx250m", + "sgmii_cdr_ref", "sgmii_cdr_fb", + "sgmii2_tx250m", "sgmii2_rx250m", + "sgmii2_cdr_ref", "sgmii2_cdr_fb", + "netsys0", "netsys1"; + assigned-clocks = <&topckgen CLK_TOP_NETSYS_2X_SEL>, + <&topckgen CLK_TOP_SGM_325M_SEL>; + assigned-clock-parents = <&apmixedsys CLK_APMIXED_NET2PLL>, + <&apmixedsys CLK_APMIXED_SGMPLL>; + mediatek,ethsys = <ðsys>; + mediatek,sgmiisys = <&sgmiisys0>, <&sgmiisys1>; + #reset-cells = <1>; + #address-cells = <1>; + #size-cells = <0>; + status = "disabled"; + }; }; }; diff --git a/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts b/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts index d73467ea3641..0f49d5764ff3 100644 --- a/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts +++ b/arch/arm64/boot/dts/mediatek/mt7986b-rfb.dts @@ -28,3 +28,73 @@ memory@40000000 { &uart0 { status = "okay"; }; + +ð { + status = "okay"; + + gmac0: mac@0 { + compatible = "mediatek,eth-mac"; + reg = <0>; + phy-mode = "2500base-x"; + + fixed-link { + speed = <2500>; + full-duplex; + pause; + }; + }; + + mdio: mdio-bus { + 
#address-cells = <1>; + #size-cells = <0>; + + switch@0 { + compatible = "mediatek,mt7531"; + reg = <31>; + reset-gpios = <&pio 5 0>; + + ports { + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + label = "lan0"; + }; + + port@1 { + reg = <1>; + label = "lan1"; + }; + + port@2 { + reg = <2>; + label = "lan2"; + }; + + port@3 { + reg = <3>; + label = "lan3"; + }; + + port@4 { + reg = <4>; + label = "lan4"; + }; + + port@6 { + reg = <6>; + label = "cpu"; + ethernet = <&gmac0>; + phy-mode = "2500base-x"; + + fixed-link { + speed = <2500>; + full-duplex; + pause; + }; + }; + }; + }; + }; +}; From patchwork Fri May 20 18:11:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 575146 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5042C433EF for ; Fri, 20 May 2022 18:12:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352434AbiETSMa (ORCPT ); Fri, 20 May 2022 14:12:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36058 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352435AbiETSM3 (ORCPT ); Fri, 20 May 2022 14:12:29 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46DDC18C064; Fri, 20 May 2022 11:12:27 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id DF66BB82D90; Fri, 20 May 2022 18:12:25 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AA46BC34118; Fri, 20 May 2022 18:12:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070344; bh=H60Wh+bnJHjx+ZsgY6FYl2p/uE81sYQWMHIPyF7iYtI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LxIif3A2b8O7l6j75jjy8dWKZDajpGQjSoCGgLcjL05kqz88YR9PJz4APlVCTNWQ+ DpwZQlwM4XOgXDwL7x3ADH+zZyLp0nHc5d/3PJIkIDIlbuKVtjq8GJFHUmQle1ZFZF ISwmshCUHfiXBKBFyJtwecBPvHDbTbqhQCDFzS3qaMpQVPLAUWFwbUQV/9KX4gtmtV ODBxgr8/6UgInoyS2i4mf/aUP2YOJM5QsV5PXH45qighIDd5HcRoiR4QQNzQn/SEHi O8ECfAdsBJPsBlqDkpPrl2hE7NkDqNMqYB6JfKYnqWXvbg9p73eD+tJIyrM8ttk57A a+/vqjUPL2Eig== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 02/16] dt-bindings: net: mediatek,net: add mt7986-eth binding Date: Fri, 20 May 2022 20:11:25 +0200 Message-Id: X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Introduce dts bindings for mt7986 soc in mediatek,net.yaml. 
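The mt7986-eth binding below requires 4 interrupts and 15 named clocks. As a
hedged sketch of how a consumer would typically resolve those clock-names
entries (this is not the actual mtk_eth_soc code, which keeps its own clock
table; the example_* identifiers are invented for illustration):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>

static const char * const example_mt7986_clk_names[] = {
    "fe", "gp2", "gp1", "wocpu1", "wocpu0",
    "sgmii_tx250m", "sgmii_rx250m", "sgmii_cdr_ref", "sgmii_cdr_fb",
    "sgmii2_tx250m", "sgmii2_rx250m", "sgmii2_cdr_ref", "sgmii2_cdr_fb",
    "netsys0", "netsys1",
};

static int example_get_eth_clocks(struct device *dev, struct clk **clks)
{
    unsigned int i;

    for (i = 0; i < ARRAY_SIZE(example_mt7986_clk_names); i++) {
        /* devm_clk_get() resolves each entry of "clock-names" */
        clks[i] = devm_clk_get(dev, example_mt7986_clk_names[i]);
        if (IS_ERR(clks[i]))
            return PTR_ERR(clks[i]);
    }

    return 0;
}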
Reviewed-by: Rob Herring Signed-off-by: Lorenzo Bianconi --- .../devicetree/bindings/net/mediatek,net.yaml | 141 +++++++++++++++++- 1 file changed, 139 insertions(+), 2 deletions(-) diff --git a/Documentation/devicetree/bindings/net/mediatek,net.yaml b/Documentation/devicetree/bindings/net/mediatek,net.yaml index 43cc4024ef98..699164dd1295 100644 --- a/Documentation/devicetree/bindings/net/mediatek,net.yaml +++ b/Documentation/devicetree/bindings/net/mediatek,net.yaml @@ -21,6 +21,7 @@ properties: - mediatek,mt7623-eth - mediatek,mt7622-eth - mediatek,mt7629-eth + - mediatek,mt7986-eth - ralink,rt5350-eth reg: @@ -28,7 +29,7 @@ properties: interrupts: minItems: 3 - maxItems: 3 + maxItems: 4 power-domains: maxItems: 1 @@ -88,6 +89,9 @@ allOf: - mediatek,mt7623-eth then: properties: + interrupts: + maxItems: 3 + clocks: minItems: 4 maxItems: 4 @@ -112,6 +116,9 @@ allOf: const: mediatek,mt7622-eth then: properties: + interrupts: + maxItems: 3 + clocks: minItems: 11 maxItems: 11 @@ -155,6 +162,9 @@ allOf: const: mediatek,mt7629-eth then: properties: + interrupts: + maxItems: 3 + clocks: minItems: 17 maxItems: 17 @@ -189,6 +199,42 @@ allOf: minItems: 2 maxItems: 2 + - if: + properties: + compatible: + contains: + const: mediatek,mt7986-eth + then: + properties: + interrupts: + minItems: 4 + + clocks: + minItems: 15 + maxItems: 15 + + clock-names: + items: + - const: fe + - const: gp2 + - const: gp1 + - const: wocpu1 + - const: wocpu0 + - const: sgmii_tx250m + - const: sgmii_rx250m + - const: sgmii_cdr_ref + - const: sgmii_cdr_fb + - const: sgmii2_tx250m + - const: sgmii2_rx250m + - const: sgmii2_cdr_ref + - const: sgmii2_cdr_fb + - const: netsys0 + - const: netsys1 + + mediatek,sgmiisys: + minItems: 2 + maxItems: 2 + patternProperties: "^mac@[0-1]$": type: object @@ -219,7 +265,6 @@ required: - interrupts - clocks - clock-names - - power-domains - mediatek,ethsys unevaluatedProperties: false @@ -295,3 +340,95 @@ examples: }; }; }; + + - | + #include + #include + #include + + soc { + #address-cells = <2>; + #size-cells = <2>; + + eth: ethernet@15100000 { + #define CLK_ETH_FE_EN 0 + #define CLK_ETH_WOCPU1_EN 3 + #define CLK_ETH_WOCPU0_EN 4 + #define CLK_TOP_NETSYS_SEL 43 + #define CLK_TOP_NETSYS_500M_SEL 44 + #define CLK_TOP_NETSYS_2X_SEL 46 + #define CLK_TOP_SGM_325M_SEL 47 + #define CLK_APMIXED_NET2PLL 1 + #define CLK_APMIXED_SGMPLL 3 + + compatible = "mediatek,mt7986-eth"; + reg = <0 0x15100000 0 0x80000>; + interrupts = , + , + , + ; + clocks = <ðsys CLK_ETH_FE_EN>, + <ðsys CLK_ETH_GP2_EN>, + <ðsys CLK_ETH_GP1_EN>, + <ðsys CLK_ETH_WOCPU1_EN>, + <ðsys CLK_ETH_WOCPU0_EN>, + <&sgmiisys0 CLK_SGMII_TX250M_EN>, + <&sgmiisys0 CLK_SGMII_RX250M_EN>, + <&sgmiisys0 CLK_SGMII_CDR_REF>, + <&sgmiisys0 CLK_SGMII_CDR_FB>, + <&sgmiisys1 CLK_SGMII_TX250M_EN>, + <&sgmiisys1 CLK_SGMII_RX250M_EN>, + <&sgmiisys1 CLK_SGMII_CDR_REF>, + <&sgmiisys1 CLK_SGMII_CDR_FB>, + <&topckgen CLK_TOP_NETSYS_SEL>, + <&topckgen CLK_TOP_NETSYS_SEL>; + clock-names = "fe", "gp2", "gp1", "wocpu1", "wocpu0", + "sgmii_tx250m", "sgmii_rx250m", + "sgmii_cdr_ref", "sgmii_cdr_fb", + "sgmii2_tx250m", "sgmii2_rx250m", + "sgmii2_cdr_ref", "sgmii2_cdr_fb", + "netsys0", "netsys1"; + mediatek,ethsys = <ðsys>; + mediatek,sgmiisys = <&sgmiisys0>, <&sgmiisys1>; + assigned-clocks = <&topckgen CLK_TOP_NETSYS_2X_SEL>, + <&topckgen CLK_TOP_SGM_325M_SEL>; + assigned-clock-parents = <&apmixedsys CLK_APMIXED_NET2PLL>, + <&apmixedsys CLK_APMIXED_SGMPLL>; + + #address-cells = <1>; + #size-cells = <0>; + + mdio: mdio-bus { + #address-cells = <1>; + #size-cells 
= <0>;
+
+                phy5: ethernet-phy@0 {
+                    compatible = "ethernet-phy-id67c9.de0a";
+                    phy-mode = "2500base-x";
+                    reset-gpios = <&pio 6 1>;
+                    reset-deassert-us = <20000>;
+                    reg = <5>;
+                };
+
+                phy6: ethernet-phy@1 {
+                    compatible = "ethernet-phy-id67c9.de0a";
+                    phy-mode = "2500base-x";
+                    reg = <6>;
+                };
+            };
+
+            mac0: mac@0 {
+                compatible = "mediatek,eth-mac";
+                phy-mode = "2500base-x";
+                phy-handle = <&phy5>;
+                reg = <0>;
+            };
+
+            mac1: mac@1 {
+                compatible = "mediatek,eth-mac";
+                phy-mode = "2500base-x";
+                phy-handle = <&phy6>;
+                reg = <1>;
+            };
+        };
+    };

From patchwork Fri May 20 18:11:26 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 574667
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com,
    Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com,
    linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org,
    robh@kernel.org, lorenzo.bianconi@redhat.com
Subject: [PATCH v3 net-next 03/16] net: ethernet: mtk_eth_soc: rely on GFP_KERNEL for dma_alloc_coherent whenever possible
Date: Fri, 20 May 2022 20:11:26 +0200
Message-Id: <2962c1559ff67d2095acbb51d2bcd1dfd8b0abe4.1653069056.git.lorenzo@kernel.org>

Rely on GFP_KERNEL for dma descriptors mappings in mtk_tx_alloc(),
mtk_rx_alloc() and mtk_init_fq_dma() since they are run in non-irq
context.
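The allocations touched here run from probe/ndo_open paths where sleeping is
allowed, so GFP_KERNEL is appropriate. A minimal, generic sketch of the
pattern (not the driver code; example_alloc_ring() is a made-up helper):

#include <linux/dma-mapping.h>

/*
 * In process context the allocator may sleep and reclaim memory, so
 * GFP_KERNEL is far less likely to fail than GFP_ATOMIC, which is only
 * needed when the caller cannot sleep (IRQ/atomic context).
 */
static void *example_alloc_ring(struct device *dev, size_t size,
                                dma_addr_t *phys)
{
    return dma_alloc_coherent(dev, size, phys, GFP_KERNEL);
}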
Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 16f131445d8b..ccd864c968b2 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -816,7 +816,7 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) eth->scratch_ring = dma_alloc_coherent(eth->dma_dev, cnt * sizeof(struct mtk_tx_dma), ð->phy_scratch_ring, - GFP_ATOMIC); + GFP_KERNEL); if (unlikely(!eth->scratch_ring)) return -ENOMEM; @@ -1591,7 +1591,7 @@ static int mtk_tx_alloc(struct mtk_eth *eth) goto no_tx_mem; ring->dma = dma_alloc_coherent(eth->dma_dev, MTK_DMA_SIZE * sz, - &ring->phys, GFP_ATOMIC); + &ring->phys, GFP_KERNEL); if (!ring->dma) goto no_tx_mem; @@ -1609,8 +1609,7 @@ static int mtk_tx_alloc(struct mtk_eth *eth) */ if (!MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { ring->dma_pdma = dma_alloc_coherent(eth->dma_dev, MTK_DMA_SIZE * sz, - &ring->phys_pdma, - GFP_ATOMIC); + &ring->phys_pdma, GFP_KERNEL); if (!ring->dma_pdma) goto no_tx_mem; @@ -1722,7 +1721,7 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) ring->dma = dma_alloc_coherent(eth->dma_dev, rx_dma_size * sizeof(*ring->dma), - &ring->phys, GFP_ATOMIC); + &ring->phys, GFP_KERNEL); if (!ring->dma) return -ENOMEM; From patchwork Fri May 20 18:11:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 575145 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3679EC4332F for ; Fri, 20 May 2022 18:12:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352443AbiETSMg (ORCPT ); Fri, 20 May 2022 14:12:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36110 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345315AbiETSMd (ORCPT ); Fri, 20 May 2022 14:12:33 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 34D4D18C07B; Fri, 20 May 2022 11:12:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B3CC1617E6; Fri, 20 May 2022 18:12:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 49E5FC34114; Fri, 20 May 2022 18:12:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070351; bh=cnPFNRCLliPS483h8KdIE8wmKrAo89MkfGlikr6CLqg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aXSyOfVYcqfaWZs9UiL76lpJidlV9vpeL2g2gCKW2VBD15eYTU2Q7mh+HQrSF/f3o bvCP48VRTveMcRKsDZZcvOyQGAvFdbgwaftXE/bnelPeoVmtbOizApv3G4Wn2Ik6SK sdlQpii8FV7WVzi3V0bKKxVHUdbs+tdef7fdPM/y/7jDmNHDhMGDRgX10t/PZ6SbYq eudCFhoFdejQLzaMFK0E0HLsZU9cDKsk4GqTp8OqQYzFAu2KeGt9X1JzmkvK+jqThn DNYobol345HqUyp78jFKPInZWPSHH+wg0rhr4/gnSY4Zbp48Dmd7iCH9bwKROF5NGQ C9WdgbWgfSWrg== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, 
kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 04/16] net: ethernet: mtk_eth_soc: move tx dma desc configuration in mtk_tx_set_dma_desc Date: Fri, 20 May 2022 20:11:27 +0200 Message-Id: <81fbb5f7511de730ed02e5e37e22fd46248eda03.1653069056.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Move tx dma descriptor configuration in mtk_tx_set_dma_desc routine. This is a preliminary patch to introduce mt7986 ethernet support since it relies on a different tx dma descriptor layout. Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 105 +++++++++++--------- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 11 ++ 2 files changed, 67 insertions(+), 49 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index ccd864c968b2..6e713f68273f 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -942,18 +942,51 @@ static void setup_tx_buf(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, } } +static void mtk_tx_set_dma_desc(struct net_device *dev, struct mtk_tx_dma *desc, + struct mtk_tx_dma_desc_info *info) +{ + struct mtk_mac *mac = netdev_priv(dev); + u32 data; + + WRITE_ONCE(desc->txd1, info->addr); + + data = TX_DMA_SWC | TX_DMA_PLEN0(info->size); + if (info->last) + data |= TX_DMA_LS0; + WRITE_ONCE(desc->txd3, data); + + data = (mac->id + 1) << TX_DMA_FPORT_SHIFT; /* forward port */ + if (info->first) { + if (info->gso) + data |= TX_DMA_TSO; + /* tx checksum offload */ + if (info->csum) + data |= TX_DMA_CHKSUM; + /* vlan header offload */ + if (info->vlan) + data |= TX_DMA_INS_VLAN | info->vlan_tci; + } + WRITE_ONCE(desc->txd4, data); +} + static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, int tx_num, struct mtk_tx_ring *ring, bool gso) { + struct mtk_tx_dma_desc_info txd_info = { + .size = skb_headlen(skb), + .gso = gso, + .csum = skb->ip_summed == CHECKSUM_PARTIAL, + .vlan = skb_vlan_tag_present(skb), + .vlan_tci = skb_vlan_tag_get(skb), + .first = true, + .last = !skb_is_nonlinear(skb), + }; struct mtk_mac *mac = netdev_priv(dev); struct mtk_eth *eth = mac->hw; struct mtk_tx_dma *itxd, *txd; struct mtk_tx_dma *itxd_pdma, *txd_pdma; struct mtk_tx_buf *itx_buf, *tx_buf; - dma_addr_t mapped_addr; - unsigned int nr_frags; int i, n_desc = 1; - u32 txd4 = 0, fport; int k = 0; itxd = ring->next_free; @@ -961,49 +994,32 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, if (itxd == ring->last_free) return -ENOMEM; - /* set the forward port */ - fport = (mac->id + 1) << TX_DMA_FPORT_SHIFT; - txd4 |= fport; - itx_buf = mtk_desc_to_tx_buf(ring, itxd); memset(itx_buf, 0, sizeof(*itx_buf)); - if (gso) - txd4 |= TX_DMA_TSO; - - /* TX Checksum offload */ - if (skb->ip_summed == CHECKSUM_PARTIAL) - txd4 |= TX_DMA_CHKSUM; - - /* VLAN header offload */ - if (skb_vlan_tag_present(skb)) - txd4 |= TX_DMA_INS_VLAN | skb_vlan_tag_get(skb); - - mapped_addr = dma_map_single(eth->dma_dev, skb->data, - skb_headlen(skb), DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(eth->dma_dev, mapped_addr))) + txd_info.addr = dma_map_single(eth->dma_dev, skb->data, txd_info.size, + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(eth->dma_dev, txd_info.addr))) return -ENOMEM; - 
WRITE_ONCE(itxd->txd1, mapped_addr); + mtk_tx_set_dma_desc(dev, itxd, &txd_info); + itx_buf->flags |= MTK_TX_FLAGS_SINGLE0; itx_buf->flags |= (!mac->id) ? MTK_TX_FLAGS_FPORT0 : MTK_TX_FLAGS_FPORT1; - setup_tx_buf(eth, itx_buf, itxd_pdma, mapped_addr, skb_headlen(skb), + setup_tx_buf(eth, itx_buf, itxd_pdma, txd_info.addr, txd_info.size, k++); /* TX SG offload */ txd = itxd; txd_pdma = qdma_to_pdma(ring, txd); - nr_frags = skb_shinfo(skb)->nr_frags; - for (i = 0; i < nr_frags; i++) { + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; unsigned int offset = 0; int frag_size = skb_frag_size(frag); while (frag_size) { - bool last_frag = false; - unsigned int frag_map_size; bool new_desc = true; if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA) || @@ -1018,23 +1034,17 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, new_desc = false; } - - frag_map_size = min(frag_size, MTK_TX_DMA_BUF_LEN); - mapped_addr = skb_frag_dma_map(eth->dma_dev, frag, offset, - frag_map_size, - DMA_TO_DEVICE); - if (unlikely(dma_mapping_error(eth->dma_dev, mapped_addr))) + memset(&txd_info, 0, sizeof(struct mtk_tx_dma_desc_info)); + txd_info.size = min(frag_size, MTK_TX_DMA_BUF_LEN); + txd_info.last = i == skb_shinfo(skb)->nr_frags - 1 && + !(frag_size - txd_info.size); + txd_info.addr = skb_frag_dma_map(eth->dma_dev, frag, + offset, txd_info.size, + DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(eth->dma_dev, txd_info.addr))) goto err_dma; - if (i == nr_frags - 1 && - (frag_size - frag_map_size) == 0) - last_frag = true; - - WRITE_ONCE(txd->txd1, mapped_addr); - WRITE_ONCE(txd->txd3, (TX_DMA_SWC | - TX_DMA_PLEN0(frag_map_size) | - last_frag * TX_DMA_LS0)); - WRITE_ONCE(txd->txd4, fport); + mtk_tx_set_dma_desc(dev, txd, &txd_info); tx_buf = mtk_desc_to_tx_buf(ring, txd); if (new_desc) @@ -1044,20 +1054,17 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, tx_buf->flags |= (!mac->id) ? 
MTK_TX_FLAGS_FPORT0 : MTK_TX_FLAGS_FPORT1; - setup_tx_buf(eth, tx_buf, txd_pdma, mapped_addr, - frag_map_size, k++); + setup_tx_buf(eth, tx_buf, txd_pdma, txd_info.addr, + txd_info.size, k++); - frag_size -= frag_map_size; - offset += frag_map_size; + frag_size -= txd_info.size; + offset += txd_info.size; } } /* store skb to cleanup */ itx_buf->skb = skb; - WRITE_ONCE(itxd->txd4, txd4); - WRITE_ONCE(itxd->txd3, (TX_DMA_SWC | TX_DMA_PLEN0(skb_headlen(skb)) | - (!nr_frags * TX_DMA_LS0))); if (!MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { if (k & 0x1) txd_pdma->txd2 |= TX_DMA_LS0; diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index c1d46eb281ea..7e0f2964ac23 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -846,6 +846,17 @@ enum mkt_eth_capabilities { MTK_MUX_U3_GMAC2_TO_QPHY | \ MTK_MUX_GMAC12_TO_GEPHY_SGMII | MTK_QDMA) +struct mtk_tx_dma_desc_info { + dma_addr_t addr; + u32 size; + u16 vlan_tci; + u8 gso:1; + u8 csum:1; + u8 vlan:1; + u8 first:1; + u8 last:1; +}; + /* struct mtk_eth_data - This is the structure holding all differences * among various plaforms * @ana_rgc3: The offset for register ANA_RGC3 related to From patchwork Fri May 20 18:11:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 574666 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6F2CC433FE for ; Fri, 20 May 2022 18:12:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352435AbiETSMh (ORCPT ); Fri, 20 May 2022 14:12:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36150 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352442AbiETSMg (ORCPT ); Fri, 20 May 2022 14:12:36 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 65071E52AC; Fri, 20 May 2022 11:12:35 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id F3C206179F; Fri, 20 May 2022 18:12:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 90C79C34118; Fri, 20 May 2022 18:12:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070354; bh=6Sz9o3s/BSWowOrmzmVWTU+Ma7E8U9l+fMH1xcnlAoo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H1pTcxtY8GcZ99/KPyxnGLlJFl69k24SWSDHSYpgamvs0Ha4AdWzyfntV14qBaT6j boMyYzc3KtAvBX3NgJL11o61GaZp4GF/H4w439gKAZ53a6cBTLGht5wejTGvWU9A5O 5xr/FSdHaKo35CHezH7bh5akdB7BvxYph6OzsEP5W6lQxFgwDGvTirWFuqI6NZwORT 0nssdDr8VKMzl9MPHSpLzNxMXxfW+Ye83bnRTHTFJT6LwI7+bkFZIFbpbWh+hVpBwZ +8oqxHFtGD7v6faMYfoPbSjI5N+oivXmMK9HFrm37ZeJCqaJt/MDLCNUbcVZeIxXSE ajzRrznQ+pL0A== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: 
[PATCH v3 net-next 05/16] net: ethernet: mtk_eth_soc: add txd_size to mtk_soc_data Date: Fri, 20 May 2022 20:11:28 +0200 Message-Id: X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org In order to remove mtk_tx_dma size dependency, introduce txd_size in mtk_soc_data data structure. Rely on txd_size in mtk_init_fq_dma() and mtk_dma_free() routines. This is a preliminary patch to add mt7986 ethernet support. Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 47 +++++++++++++++------ drivers/net/ethernet/mediatek/mtk_eth_soc.h | 4 ++ 2 files changed, 38 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 6e713f68273f..e00c83982aa9 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -808,20 +808,20 @@ static inline bool mtk_rx_get_desc(struct mtk_rx_dma *rxd, /* the qdma core needs scratch memory to be setup */ static int mtk_init_fq_dma(struct mtk_eth *eth) { + const struct mtk_soc_data *soc = eth->soc; dma_addr_t phy_ring_tail; int cnt = MTK_DMA_SIZE; dma_addr_t dma_addr; int i; eth->scratch_ring = dma_alloc_coherent(eth->dma_dev, - cnt * sizeof(struct mtk_tx_dma), + cnt * soc->txrx.txd_size, ð->phy_scratch_ring, GFP_KERNEL); if (unlikely(!eth->scratch_ring)) return -ENOMEM; - eth->scratch_head = kcalloc(cnt, MTK_QDMA_PAGE_SIZE, - GFP_KERNEL); + eth->scratch_head = kcalloc(cnt, MTK_QDMA_PAGE_SIZE, GFP_KERNEL); if (unlikely(!eth->scratch_head)) return -ENOMEM; @@ -831,16 +831,19 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr))) return -ENOMEM; - phy_ring_tail = eth->phy_scratch_ring + - (sizeof(struct mtk_tx_dma) * (cnt - 1)); + phy_ring_tail = eth->phy_scratch_ring + soc->txrx.txd_size * (cnt - 1); for (i = 0; i < cnt; i++) { - eth->scratch_ring[i].txd1 = - (dma_addr + (i * MTK_QDMA_PAGE_SIZE)); + struct mtk_tx_dma *txd; + + txd = (void *)eth->scratch_ring + i * soc->txrx.txd_size; + txd->txd1 = dma_addr + i * MTK_QDMA_PAGE_SIZE; if (i < cnt - 1) - eth->scratch_ring[i].txd2 = (eth->phy_scratch_ring + - ((i + 1) * sizeof(struct mtk_tx_dma))); - eth->scratch_ring[i].txd3 = TX_DMA_SDL(MTK_QDMA_PAGE_SIZE); + txd->txd2 = eth->phy_scratch_ring + + (i + 1) * soc->txrx.txd_size; + + txd->txd3 = TX_DMA_PLEN0(MTK_QDMA_PAGE_SIZE); + txd->txd4 = 0; } mtk_w32(eth, eth->phy_scratch_ring, MTK_QDMA_FQ_HEAD); @@ -2131,6 +2134,7 @@ static int mtk_dma_init(struct mtk_eth *eth) static void mtk_dma_free(struct mtk_eth *eth) { + const struct mtk_soc_data *soc = eth->soc; int i; for (i = 0; i < MTK_MAC_COUNT; i++) @@ -2138,9 +2142,8 @@ static void mtk_dma_free(struct mtk_eth *eth) netdev_reset_queue(eth->netdev[i]); if (eth->scratch_ring) { dma_free_coherent(eth->dma_dev, - MTK_DMA_SIZE * sizeof(struct mtk_tx_dma), - eth->scratch_ring, - eth->phy_scratch_ring); + MTK_DMA_SIZE * soc->txrx.txd_size, + eth->scratch_ring, eth->phy_scratch_ring); eth->scratch_ring = NULL; eth->phy_scratch_ring = 0; } @@ -3373,6 +3376,9 @@ static const struct mtk_soc_data mt2701_data = { .hw_features = MTK_HW_FEATURES, .required_clks = MT7623_CLKS_BITMAP, .required_pctl = true, + .txrx = { + .txd_size = sizeof(struct mtk_tx_dma), + }, }; static const struct mtk_soc_data mt7621_data = { @@ -3381,6 +3387,9 @@ static const struct mtk_soc_data mt7621_data = { .required_clks = MT7621_CLKS_BITMAP, 
.required_pctl = false, .offload_version = 2, + .txrx = { + .txd_size = sizeof(struct mtk_tx_dma), + }, }; static const struct mtk_soc_data mt7622_data = { @@ -3390,6 +3399,9 @@ static const struct mtk_soc_data mt7622_data = { .required_clks = MT7622_CLKS_BITMAP, .required_pctl = false, .offload_version = 2, + .txrx = { + .txd_size = sizeof(struct mtk_tx_dma), + }, }; static const struct mtk_soc_data mt7623_data = { @@ -3398,6 +3410,9 @@ static const struct mtk_soc_data mt7623_data = { .required_clks = MT7623_CLKS_BITMAP, .required_pctl = true, .offload_version = 2, + .txrx = { + .txd_size = sizeof(struct mtk_tx_dma), + }, }; static const struct mtk_soc_data mt7629_data = { @@ -3406,6 +3421,9 @@ static const struct mtk_soc_data mt7629_data = { .hw_features = MTK_HW_FEATURES, .required_clks = MT7629_CLKS_BITMAP, .required_pctl = false, + .txrx = { + .txd_size = sizeof(struct mtk_tx_dma), + }, }; static const struct mtk_soc_data rt5350_data = { @@ -3413,6 +3431,9 @@ static const struct mtk_soc_data rt5350_data = { .hw_features = MTK_HW_FEATURES_MT7628, .required_clks = MT7628_CLKS_BITMAP, .required_pctl = false, + .txrx = { + .txd_size = sizeof(struct mtk_tx_dma), + }, }; const struct of_device_id of_mtk_match[] = { diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 7e0f2964ac23..7a5ad14b8be6 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -867,6 +867,7 @@ struct mtk_tx_dma_desc_info { * the target SoC * @required_pctl A bool value to show whether the SoC requires * the extra setup for those pins used by GMAC. + * @txd_size Tx DMA descriptor size. */ struct mtk_soc_data { u32 ana_rgc3; @@ -875,6 +876,9 @@ struct mtk_soc_data { bool required_pctl; u8 offload_version; netdev_features_t hw_features; + struct { + u32 txd_size; + } txrx; }; /* currently no SoC has more than 2 macs */ From patchwork Fri May 20 18:11:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 575144 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85B8EC433EF for ; Fri, 20 May 2022 18:12:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352454AbiETSMq (ORCPT ); Fri, 20 May 2022 14:12:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36204 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352445AbiETSMk (ORCPT ); Fri, 20 May 2022 14:12:40 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BC77518C07D; Fri, 20 May 2022 11:12:38 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4F376617A3; Fri, 20 May 2022 18:12:38 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D9B98C34116; Fri, 20 May 2022 18:12:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070357; bh=TS0a0jAfKnRA6PI/nGBv67O8H4VESfPde3pAGx7v5zk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=BHHDlfau4T2x+jgnBX4Gd8w6TWLWSi/AtYcF4ERKSG0ge92eIyGbTbk5UozajXfCR IRNIY7Wd8jqSmtCbXGTUsf6K+YOMfv4YLHkrrUmUTmYXL0cIbW6ygtv+Su5c2HpFAB q0vfl20aUf181iAJdhoJwfL5Tl9efvGMi50MWckohE1Rtaj83djOFv0mDIkSwTBhc2 M5eTBanK7qVA81+KNR+8kqaMu3PlIo7OAdfanOJ/8lP/ElgGQfnlReYl6I1adCIEmE qeQ5oXJTT2zDsBojdVPqUrAvndTRwKERFn0smql5jjKt5hvcPebWaYKLoEEbYr/Xx9 pBLUfoO0k3C+Q== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 06/16] net: ethernet: mtk_eth_soc: rely on txd_size in mtk_tx_alloc/mtk_tx_clean Date: Fri, 20 May 2022 20:11:29 +0200 Message-Id: X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org This is a preliminary patch to add mt7986 ethernet support. Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 23 ++++++++++++--------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index e00c83982aa9..331814f4801f 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1592,8 +1592,10 @@ static int mtk_napi_rx(struct napi_struct *napi, int budget) static int mtk_tx_alloc(struct mtk_eth *eth) { + const struct mtk_soc_data *soc = eth->soc; struct mtk_tx_ring *ring = ð->tx_ring; - int i, sz = sizeof(*ring->dma); + int i, sz = soc->txrx.txd_size; + struct mtk_tx_dma *txd; ring->buf = kcalloc(MTK_DMA_SIZE, sizeof(*ring->buf), GFP_KERNEL); @@ -1609,8 +1611,10 @@ static int mtk_tx_alloc(struct mtk_eth *eth) int next = (i + 1) % MTK_DMA_SIZE; u32 next_ptr = ring->phys + next * sz; - ring->dma[i].txd2 = next_ptr; - ring->dma[i].txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU; + txd = (void *)ring->dma + i * sz; + txd->txd2 = next_ptr; + txd->txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU; + txd->txd4 = 0; } /* On MT7688 (PDMA only) this driver uses the ring->dma structs @@ -1632,7 +1636,7 @@ static int mtk_tx_alloc(struct mtk_eth *eth) ring->dma_size = MTK_DMA_SIZE; atomic_set(&ring->free_count, MTK_DMA_SIZE - 2); ring->next_free = &ring->dma[0]; - ring->last_free = &ring->dma[MTK_DMA_SIZE - 1]; + ring->last_free = (void *)txd; ring->last_free_ptr = (u32)(ring->phys + ((MTK_DMA_SIZE - 1) * sz)); ring->thresh = MAX_SKB_FRAGS; @@ -1665,6 +1669,7 @@ static int mtk_tx_alloc(struct mtk_eth *eth) static void mtk_tx_clean(struct mtk_eth *eth) { + const struct mtk_soc_data *soc = eth->soc; struct mtk_tx_ring *ring = ð->tx_ring; int i; @@ -1677,17 +1682,15 @@ static void mtk_tx_clean(struct mtk_eth *eth) if (ring->dma) { dma_free_coherent(eth->dma_dev, - MTK_DMA_SIZE * sizeof(*ring->dma), - ring->dma, - ring->phys); + MTK_DMA_SIZE * soc->txrx.txd_size, + ring->dma, ring->phys); ring->dma = NULL; } if (ring->dma_pdma) { dma_free_coherent(eth->dma_dev, - MTK_DMA_SIZE * sizeof(*ring->dma_pdma), - ring->dma_pdma, - ring->phys_pdma); + MTK_DMA_SIZE * soc->txrx.txd_size, + ring->dma_pdma, ring->phys_pdma); ring->dma_pdma = NULL; } } From patchwork Fri May 20 18:11:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 574665 Return-Path: 
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com,
    Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com,
    linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org,
    robh@kernel.org, lorenzo.bianconi@redhat.com
Subject: [PATCH v3 net-next 07/16] net: ethernet: mtk_eth_soc: rely on txd_size in mtk_desc_to_tx_buf
Date: Fri, 20 May 2022 20:11:30 +0200
Message-Id: <1f2d314864f8f3547c8225ef48861ea95cb0abd3.1653069056.git.lorenzo@kernel.org>

This is a preliminary patch to add mt7986 ethernet support.
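With the descriptor size now carried in mtk_soc_data, the tx_buf index is
derived from the descriptor's byte offset in the ring instead of pointer
arithmetic on a fixed struct type. A self-contained model of that indexing,
assuming a made-up 32-byte descriptor and example_* names (this is not the
driver code):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct example_ring {
    void *dma;      /* base of the descriptor array */
};

static unsigned int example_desc_to_idx(const struct example_ring *ring,
                                        const void *txd, uint32_t txd_size)
{
    /* byte offset from the ring base, divided by the per-SoC size */
    return (unsigned int)(((const char *)txd -
                           (const char *)ring->dma) / txd_size);
}

int main(void)
{
    struct example_ring ring;
    uint32_t txd_size = 32;     /* e.g. a 32-byte descriptor layout */

    ring.dma = calloc(64, txd_size);
    assert(ring.dma);
    /* descriptor #5 starts 5 * txd_size bytes into the ring */
    assert(example_desc_to_idx(&ring, (char *)ring.dma + 5 * txd_size,
                               txd_size) == 5);
    free(ring.dma);
    return 0;
}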
Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 26 ++++++++++++--------- 1 file changed, 15 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 331814f4801f..7c314324f929 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -861,10 +861,11 @@ static inline void *mtk_qdma_phys_to_virt(struct mtk_tx_ring *ring, u32 desc) return ret + (desc - ring->phys); } -static inline struct mtk_tx_buf *mtk_desc_to_tx_buf(struct mtk_tx_ring *ring, - struct mtk_tx_dma *txd) +static struct mtk_tx_buf *mtk_desc_to_tx_buf(struct mtk_tx_ring *ring, + struct mtk_tx_dma *txd, + u32 txd_size) { - int idx = txd - ring->dma; + int idx = ((void *)txd - (void *)ring->dma) / txd_size; return &ring->buf[idx]; } @@ -986,6 +987,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, }; struct mtk_mac *mac = netdev_priv(dev); struct mtk_eth *eth = mac->hw; + const struct mtk_soc_data *soc = eth->soc; struct mtk_tx_dma *itxd, *txd; struct mtk_tx_dma *itxd_pdma, *txd_pdma; struct mtk_tx_buf *itx_buf, *tx_buf; @@ -997,7 +999,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, if (itxd == ring->last_free) return -ENOMEM; - itx_buf = mtk_desc_to_tx_buf(ring, itxd); + itx_buf = mtk_desc_to_tx_buf(ring, itxd, soc->txrx.txd_size); memset(itx_buf, 0, sizeof(*itx_buf)); txd_info.addr = dma_map_single(eth->dma_dev, skb->data, txd_info.size, @@ -1025,7 +1027,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, while (frag_size) { bool new_desc = true; - if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA) || + if (MTK_HAS_CAPS(soc->caps, MTK_QDMA) || (i & 0x1)) { txd = mtk_qdma_phys_to_virt(ring, txd->txd2); txd_pdma = qdma_to_pdma(ring, txd); @@ -1049,7 +1051,8 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, mtk_tx_set_dma_desc(dev, txd, &txd_info); - tx_buf = mtk_desc_to_tx_buf(ring, txd); + tx_buf = mtk_desc_to_tx_buf(ring, txd, + soc->txrx.txd_size); if (new_desc) memset(tx_buf, 0, sizeof(*tx_buf)); tx_buf->skb = (struct sk_buff *)MTK_DMA_DUMMY_DESC; @@ -1068,7 +1071,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, /* store skb to cleanup */ itx_buf->skb = skb; - if (!MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { + if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { if (k & 0x1) txd_pdma->txd2 |= TX_DMA_LS0; else @@ -1086,7 +1089,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, */ wmb(); - if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { + if (MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { if (netif_xmit_stopped(netdev_get_tx_queue(dev, 0)) || !netdev_xmit_more()) mtk_w32(eth, txd->txd2, MTK_QTX_CTX_PTR); @@ -1100,13 +1103,13 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, err_dma: do { - tx_buf = mtk_desc_to_tx_buf(ring, itxd); + tx_buf = mtk_desc_to_tx_buf(ring, itxd, soc->txrx.txd_size); /* unmap dma */ mtk_tx_unmap(eth, tx_buf, false); itxd->txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU; - if (!MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) + if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) itxd_pdma->txd2 = TX_DMA_DESP2_DEF; itxd = mtk_qdma_phys_to_virt(ring, itxd->txd2); @@ -1417,7 +1420,8 @@ static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget, if ((desc->txd3 & TX_DMA_OWNER_CPU) == 0) break; - tx_buf = mtk_desc_to_tx_buf(ring, desc); + tx_buf = mtk_desc_to_tx_buf(ring, desc, + eth->soc->txrx.txd_size); if (tx_buf->flags & 
MTK_TX_FLAGS_FPORT1)
            mac = 1;

From patchwork Fri May 20 18:11:31 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 575143
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com,
    Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com,
    linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org,
    robh@kernel.org, lorenzo.bianconi@redhat.com
Subject: [PATCH v3 net-next 08/16] net: ethernet: mtk_eth_soc: rely on txd_size in txd_to_idx
Date: Fri, 20 May 2022 20:11:31 +0200
Message-Id: <04420b6dbf1982edf53fae52a6b738ac391cb5d9.1653069056.git.lorenzo@kernel.org>

This is a preliminary patch to add mt7986 ethernet support.
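For context, a standalone sketch of why txd_to_idx() must divide by the
runtime txd_size rather than relying on C pointer subtraction, which
implicitly divides by the size of the legacy descriptor struct. All values
and the legacy_txd layout below are invented for illustration; this is not
the driver code:

#include <stdint.h>
#include <stdio.h>

struct legacy_txd { uint32_t txd1, txd2, txd3, txd4; }; /* 16 bytes */

int main(void)
{
    static uint32_t ring_storage[128];      /* 512 bytes of ring memory */
    char *ring = (char *)ring_storage;
    uint32_t txd_size = 32;                 /* newer 32-byte descriptor layout */
    char *txd = ring + 3 * txd_size;        /* the fourth descriptor */

    /* pointer subtraction divides by sizeof(struct legacy_txd): 6, wrong */
    printf("legacy idx:   %td\n",
           (struct legacy_txd *)txd - (struct legacy_txd *)ring);
    /* dividing the byte offset by the per-SoC txd_size: 3, right */
    printf("txd_size idx: %ld\n", (long)((txd - ring) / txd_size));
    return 0;
}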
Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 7c314324f929..1bf5edd9eb44 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -876,9 +876,10 @@ static struct mtk_tx_dma *qdma_to_pdma(struct mtk_tx_ring *ring, return ring->dma_pdma - ring->dma + dma; } -static int txd_to_idx(struct mtk_tx_ring *ring, struct mtk_tx_dma *dma) +static int txd_to_idx(struct mtk_tx_ring *ring, struct mtk_tx_dma *dma, + u32 txd_size) { - return ((void *)dma - (void *)ring->dma) / sizeof(*dma); + return ((void *)dma - (void *)ring->dma) / txd_size; } static void mtk_tx_unmap(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, @@ -1094,8 +1095,10 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, !netdev_xmit_more()) mtk_w32(eth, txd->txd2, MTK_QTX_CTX_PTR); } else { - int next_idx = NEXT_DESP_IDX(txd_to_idx(ring, txd), - ring->dma_size); + int next_idx; + + next_idx = NEXT_DESP_IDX(txd_to_idx(ring, txd, soc->txrx.txd_size), + ring->dma_size); mtk_w32(eth, next_idx, MT7628_TX_CTX_IDX0); } From patchwork Fri May 20 18:11:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 574664 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 83A8AC433FE for ; Fri, 20 May 2022 18:12:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352460AbiETSMx (ORCPT ); Fri, 20 May 2022 14:12:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36354 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352455AbiETSMv (ORCPT ); Fri, 20 May 2022 14:12:51 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1B69F18DAC9; Fri, 20 May 2022 11:12:50 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id CB8C2B82D90; Fri, 20 May 2022 18:12:48 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C07A1C385A9; Fri, 20 May 2022 18:12:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070367; bh=+IZMGA7q81QWtnP8o5Y0D9cC9JaYvk8r7G2oLMll/EY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YcGn28r1VbhZYFeRVT6nbcD/5vCLR2yOhdLo1tEeYR3QLGosuBEOsawjoSuTa6rUm pKjenIwjoUho4Elkv7FuzWjKQqscx4jzTGVZgMMv2Jo+fZMnbW0kcwpb2aqYGVSAPV nmqqcyboGocxNLGq3mIlbm8ofzIwVp5JFHg0gRkPtjIWZuXmj7DggQ11NvZ4m80oKN rIua+B6XQATdEYVcyw0SntcLEx7lTa3CJrjz5nnHTxn2881+5Q+wYw0sa6wnD8XBfu uQAxUF9+Faaua52J22jC5ToHX5NHtevkMxAf6JFPwLuvOk71Fy5464c9KH8hgqlQTo k4Euu5Bcj5Igw== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, 
lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 09/16] net: ethernet: mtk_eth_soc: add rxd_size to mtk_soc_data Date: Fri, 20 May 2022 20:11:32 +0200 Message-Id: <970f29fe5af213e643e1a5e6aec4006f1d6d693c.1653069056.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Similar to tx counterpart, introduce rxd_size in mtk_soc_data data structure. This is a preliminary patch to add mt7986 ethernet support. Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 13 +++++++++---- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 2 ++ 2 files changed, 11 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 1bf5edd9eb44..7c4e63cc7c2a 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1740,7 +1740,7 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) } ring->dma = dma_alloc_coherent(eth->dma_dev, - rx_dma_size * sizeof(*ring->dma), + rx_dma_size * eth->soc->txrx.rxd_size, &ring->phys, GFP_KERNEL); if (!ring->dma) return -ENOMEM; @@ -1798,9 +1798,8 @@ static void mtk_rx_clean(struct mtk_eth *eth, struct mtk_rx_ring *ring) if (ring->dma) { dma_free_coherent(eth->dma_dev, - ring->dma_size * sizeof(*ring->dma), - ring->dma, - ring->phys); + ring->dma_size * eth->soc->txrx.rxd_size, + ring->dma, ring->phys); ring->dma = NULL; } } @@ -3388,6 +3387,7 @@ static const struct mtk_soc_data mt2701_data = { .required_pctl = true, .txrx = { .txd_size = sizeof(struct mtk_tx_dma), + .rxd_size = sizeof(struct mtk_rx_dma), }, }; @@ -3399,6 +3399,7 @@ static const struct mtk_soc_data mt7621_data = { .offload_version = 2, .txrx = { .txd_size = sizeof(struct mtk_tx_dma), + .rxd_size = sizeof(struct mtk_rx_dma), }, }; @@ -3411,6 +3412,7 @@ static const struct mtk_soc_data mt7622_data = { .offload_version = 2, .txrx = { .txd_size = sizeof(struct mtk_tx_dma), + .rxd_size = sizeof(struct mtk_rx_dma), }, }; @@ -3422,6 +3424,7 @@ static const struct mtk_soc_data mt7623_data = { .offload_version = 2, .txrx = { .txd_size = sizeof(struct mtk_tx_dma), + .rxd_size = sizeof(struct mtk_rx_dma), }, }; @@ -3433,6 +3436,7 @@ static const struct mtk_soc_data mt7629_data = { .required_pctl = false, .txrx = { .txd_size = sizeof(struct mtk_tx_dma), + .rxd_size = sizeof(struct mtk_rx_dma), }, }; @@ -3443,6 +3447,7 @@ static const struct mtk_soc_data rt5350_data = { .required_pctl = false, .txrx = { .txd_size = sizeof(struct mtk_tx_dma), + .rxd_size = sizeof(struct mtk_rx_dma), }, }; diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 7a5ad14b8be6..dcbf4b5c70e0 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -868,6 +868,7 @@ struct mtk_tx_dma_desc_info { * @required_pctl A bool value to show whether the SoC requires * the extra setup for those pins used by GMAC. * @txd_size Tx DMA descriptor size. + * @rxd_size Rx DMA descriptor size. 
 */
 struct mtk_soc_data {
 	u32 ana_rgc3;
@@ -878,6 +879,7 @@
 	netdev_features_t hw_features;
 	struct {
 		u32 txd_size;
+		u32 rxd_size;
 	} txrx;
 };

From patchwork Fri May 20 18:11:33 2022
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 575142
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com,
    Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com,
    linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org,
    robh@kernel.org, lorenzo.bianconi@redhat.com
Subject: [PATCH v3 net-next 10/16] net: ethernet: mtk_eth_soc: rely on txd_size field in mtk_poll_tx/mtk_poll_rx
Date: Fri, 20 May 2022 20:11:33 +0200
Message-Id: <6b20d0ac8cde94c6f5057004a60bde2b3ae37702.1653069056.git.lorenzo@kernel.org>

This is a preliminary patch to add mt7986 ethernet support.
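In the poll loops the idx-th descriptor is now reached by byte offset from
the ring base (idx * txd_size / rxd_size) instead of array indexing on a
fixed struct type. A self-contained sketch of that access pattern, with
made-up sizes and names (this is not the driver code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define EXAMPLE_RING_SIZE 8

int main(void)
{
    uint32_t rxd_size = 24;     /* per-SoC descriptor size in bytes */
    unsigned char *dma = calloc(EXAMPLE_RING_SIZE, rxd_size);
    unsigned int idx;

    if (!dma)
        return 1;

    for (idx = 0; idx < EXAMPLE_RING_SIZE; idx++) {
        /* mirrors: rxd = (void *)ring->dma + idx * eth->soc->txrx.rxd_size */
        uint32_t *rxd = (uint32_t *)(dma + idx * rxd_size);

        rxd[1] = idx;   /* stand-in for the rxd2 word of each slot */
    }

    printf("slot 5 rxd2 = %u\n", *((uint32_t *)(dma + 5 * rxd_size) + 1));
    free(dma);
    return 0;
}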
Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 7c4e63cc7c2a..c7820dbc75f1 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1235,9 +1235,12 @@ static struct mtk_rx_ring *mtk_get_rx_ring(struct mtk_eth *eth) return ð->rx_ring[0]; for (i = 0; i < MTK_MAX_RX_RING_NUM; i++) { + struct mtk_rx_dma *rxd; + ring = ð->rx_ring[i]; idx = NEXT_DESP_IDX(ring->calc_idx, ring->dma_size); - if (ring->dma[idx].rxd2 & RX_DMA_DONE) { + rxd = (void *)ring->dma + idx * eth->soc->txrx.rxd_size; + if (rxd->rxd2 & RX_DMA_DONE) { ring->calc_idx_update = true; return ring; } @@ -1288,7 +1291,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, goto rx_done; idx = NEXT_DESP_IDX(ring->calc_idx, ring->dma_size); - rxd = &ring->dma[idx]; + rxd = (void *)ring->dma + idx * eth->soc->txrx.rxd_size; data = ring->data[idx]; if (!mtk_rx_get_desc(&trxd, rxd)) @@ -1477,7 +1480,7 @@ static int mtk_poll_tx_pdma(struct mtk_eth *eth, int budget, mtk_tx_unmap(eth, tx_buf, true); - desc = &ring->dma[cpu]; + desc = (void *)ring->dma + cpu * eth->soc->txrx.txd_size; ring->last_free = desc; atomic_inc(&ring->free_count); From patchwork Fri May 20 18:11:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 574662 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 69B8BC43219 for ; Fri, 20 May 2022 18:13:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242811AbiETSNM (ORCPT ); Fri, 20 May 2022 14:13:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36542 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352470AbiETSNE (ORCPT ); Fri, 20 May 2022 14:13:04 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BC7AE18FF01; Fri, 20 May 2022 11:12:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 788AEB82D93; Fri, 20 May 2022 18:12:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 60E3CC34117; Fri, 20 May 2022 18:12:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070374; bh=jFr/C+je3NokkqyXC7uJY0JiG6vYtPSvT2wCEXT3pnc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RMp3FfJHzDDvYejKvhgrI5W11mOFuK37cKFiNHYES5trjqGFFkXFqUwJZ+9kl/eZn zsSxvN+UPifjI5zJmjufHgN/Cqq3y70m44zR9p/NVc6sy9746qELUEYO7EfkZNjNLG PiiUASaaC0ngZPH6UxKqVyTjnqtZLKZm+pjDa1V9oo6ZY3B7u8GeQSgpHSDKeDR6jB 1ph+9HHGko7znyTsu2bStguFFY1goK+fn0BRej2OqD3Wme81FSFlYF/5WCRRGcq24L 4lrbCNe6igSTNSLkZzpcdVsoB/qfq5opoZGbn51tSZLXXax3oIAGFvqM4gkruMtwW3 5OKKSOiZpiQWA== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, 
Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 11/16] net: ethernet: mtk_eth_soc: rely on rxd_size field in mtk_rx_alloc/mtk_rx_clean Date: Fri, 20 May 2022 20:11:34 +0200 Message-Id: <1be6a4f04eca440c610a9f22808cfd4062d1c096.1653069056.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Remove mtk_rx_dma structure layout dependency in mtk_rx_alloc/mtk_rx_clean. Initialize to 0 rxd3 and rxd4 in mtk_rx_alloc. This is a preliminary patch to add mt7986 ethernet support. Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 26 ++++++++++++++------- 1 file changed, 18 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index c7820dbc75f1..4706e3708bbc 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1749,18 +1749,25 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) return -ENOMEM; for (i = 0; i < rx_dma_size; i++) { + struct mtk_rx_dma *rxd; + dma_addr_t dma_addr = dma_map_single(eth->dma_dev, ring->data[i] + NET_SKB_PAD + eth->ip_align, ring->buf_size, DMA_FROM_DEVICE); if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr))) return -ENOMEM; - ring->dma[i].rxd1 = (unsigned int)dma_addr; + + rxd = (void *)ring->dma + i * eth->soc->txrx.rxd_size; + rxd->rxd1 = (unsigned int)dma_addr; if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) - ring->dma[i].rxd2 = RX_DMA_LSO; + rxd->rxd2 = RX_DMA_LSO; else - ring->dma[i].rxd2 = RX_DMA_PLEN0(ring->buf_size); + rxd->rxd2 = RX_DMA_PLEN0(ring->buf_size); + + rxd->rxd3 = 0; + rxd->rxd4 = 0; } ring->dma_size = rx_dma_size; ring->calc_idx_update = false; @@ -1785,14 +1792,17 @@ static void mtk_rx_clean(struct mtk_eth *eth, struct mtk_rx_ring *ring) if (ring->data && ring->dma) { for (i = 0; i < ring->dma_size; i++) { + struct mtk_rx_dma *rxd; + if (!ring->data[i]) continue; - if (!ring->dma[i].rxd1) + + rxd = (void *)ring->dma + i * eth->soc->txrx.rxd_size; + if (!rxd->rxd1) continue; - dma_unmap_single(eth->dma_dev, - ring->dma[i].rxd1, - ring->buf_size, - DMA_FROM_DEVICE); + + dma_unmap_single(eth->dma_dev, rxd->rxd1, + ring->buf_size, DMA_FROM_DEVICE); skb_free_frag(ring->data[i]); } kfree(ring->data); From patchwork Fri May 20 18:11:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 574661 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 161C8C4167D for ; Fri, 20 May 2022 18:13:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352481AbiETSNN (ORCPT ); Fri, 20 May 2022 14:13:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36526 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352469AbiETSNE (ORCPT ); Fri, 20 May 2022 14:13:04 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 71CFD18FF03; Fri, 20 May 2022 11:13:00 -0700 (PDT) Received: from 
smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 03F2CB82D77; Fri, 20 May 2022 18:12:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AE066C34114; Fri, 20 May 2022 18:12:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070377; bh=89mGVCDs5WpPImVKuyWK4ZWdnVhr/6Da/1I72VS/o08=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uWLRrOWcJPSZqdE9jKeTTiJOK3IBettYk9qRMD40Iby/0/8AUbvKYvSo3pqgKQyxA NaKsfwH/xXo/+6gave4qBggOfwA3iSmlKWx9/JvUJM8e741cPlkm50h0RO896QF3YJ 3mwQTbwqT5ilNHwnttJG3pAHIZfBaFZfX9yJGXq6zcyZz0dPxYCmYqTpGr5OU05BXK Mp7WTjZkM9XqRHv5iehG6eT1mP0aj4vAoMnLzJmfTbTohSkevqQtj0LfHu8N/mTKlN oap1iZYKvZvOAHYwwMYWTQiMQ3Jf0ogLgPyfwUopoK8Fvoz+jDbsYSOH5bK5fAp0JU cRa+MqQH8WedA== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 12/16] net: ethernet: mtk_eth_soc: introduce device register map Date: Fri, 20 May 2022 20:11:35 +0200 Message-Id: <96dc79ff93df47be7f290387f387ed72c5ecfec5.1653069056.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Introduce reg_map structure to add the capability to support different register definitions. Move register definitions in mtk_regmap structure. This is a preliminary patch to introduce mt7986 ethernet support. 
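[Editorial sketch, not part of the patch: with the register map in place, the irq helpers read the offset from eth->soc->reg_map at run time, so one code path drives both the QDMA layout (tx_irq_mask at 0x1a1c in mtk_reg_map) and the PDMA-only mt7628/rt5350 layout (0x0a28 in mt7628_reg_map), replacing the probe-time tx_int_mask_reg/tx_int_status_reg fixups this patch removes. Condensed sketch, locking omitted, mtk_tx_irq_update is a hypothetical helper name:]

    static void mtk_tx_irq_update(struct mtk_eth *eth, u32 mask, bool enable)
    {
            const struct mtk_reg_map *reg_map = eth->soc->reg_map;
            u32 val = mtk_r32(eth, reg_map->tx_irq_mask);

            /* 0x1a1c with mtk_reg_map, 0x0a28 with mt7628_reg_map */
            mtk_w32(eth, enable ? (val | mask) : (val & ~mask),
                    reg_map->tx_irq_mask);
    }
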
Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 223 +++++++++++++------- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 146 ++++--------- 2 files changed, 188 insertions(+), 181 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 4706e3708bbc..503829a9c270 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -34,6 +34,59 @@ MODULE_PARM_DESC(msg_level, "Message level (-1=defaults,0=none,...,16=all)"); #define MTK_ETHTOOL_STAT(x) { #x, \ offsetof(struct mtk_hw_stats, x) / sizeof(u64) } +static const struct mtk_reg_map mtk_reg_map = { + .tx_irq_mask = 0x1a1c, + .tx_irq_status = 0x1a18, + .pdma = { + .rx_ptr = 0x0900, + .rx_cnt_cfg = 0x0904, + .pcrx_ptr = 0x0908, + .glo_cfg = 0x0a04, + .rst_idx = 0x0a08, + .delay_irq = 0x0a0c, + .irq_status = 0x0a20, + .irq_mask = 0x0a28, + .int_grp = 0x0a50, + }, + .qdma = { + .qtx_cfg = 0x1800, + .rx_ptr = 0x1900, + .rx_cnt_cfg = 0x1904, + .qcrx_ptr = 0x1908, + .glo_cfg = 0x1a04, + .rst_idx = 0x1a08, + .delay_irq = 0x1a0c, + .fc_th = 0x1a10, + .int_grp = 0x1a20, + .hred = 0x1a44, + .ctx_ptr = 0x1b00, + .dtx_ptr = 0x1b04, + .crx_ptr = 0x1b10, + .drx_ptr = 0x1b14, + .fq_head = 0x1b20, + .fq_tail = 0x1b24, + .fq_count = 0x1b28, + .fq_blen = 0x1b2c, + }, + .gdm1_cnt = 0x2400, +}; + +static const struct mtk_reg_map mt7628_reg_map = { + .tx_irq_mask = 0x0a28, + .tx_irq_status = 0x0a20, + .pdma = { + .rx_ptr = 0x0900, + .rx_cnt_cfg = 0x0904, + .pcrx_ptr = 0x0908, + .glo_cfg = 0x0a04, + .rst_idx = 0x0a08, + .delay_irq = 0x0a0c, + .irq_status = 0x0a20, + .irq_mask = 0x0a28, + .int_grp = 0x0a50, + }, +}; + /* strings used by ethtool */ static const struct mtk_ethtool_stats { char str[ETH_GSTRING_LEN]; @@ -600,8 +653,8 @@ static inline void mtk_tx_irq_disable(struct mtk_eth *eth, u32 mask) u32 val; spin_lock_irqsave(ð->tx_irq_lock, flags); - val = mtk_r32(eth, eth->tx_int_mask_reg); - mtk_w32(eth, val & ~mask, eth->tx_int_mask_reg); + val = mtk_r32(eth, eth->soc->reg_map->tx_irq_mask); + mtk_w32(eth, val & ~mask, eth->soc->reg_map->tx_irq_mask); spin_unlock_irqrestore(ð->tx_irq_lock, flags); } @@ -611,8 +664,8 @@ static inline void mtk_tx_irq_enable(struct mtk_eth *eth, u32 mask) u32 val; spin_lock_irqsave(ð->tx_irq_lock, flags); - val = mtk_r32(eth, eth->tx_int_mask_reg); - mtk_w32(eth, val | mask, eth->tx_int_mask_reg); + val = mtk_r32(eth, eth->soc->reg_map->tx_irq_mask); + mtk_w32(eth, val | mask, eth->soc->reg_map->tx_irq_mask); spin_unlock_irqrestore(ð->tx_irq_lock, flags); } @@ -622,8 +675,8 @@ static inline void mtk_rx_irq_disable(struct mtk_eth *eth, u32 mask) u32 val; spin_lock_irqsave(ð->rx_irq_lock, flags); - val = mtk_r32(eth, MTK_PDMA_INT_MASK); - mtk_w32(eth, val & ~mask, MTK_PDMA_INT_MASK); + val = mtk_r32(eth, eth->soc->reg_map->pdma.irq_mask); + mtk_w32(eth, val & ~mask, eth->soc->reg_map->pdma.irq_mask); spin_unlock_irqrestore(ð->rx_irq_lock, flags); } @@ -633,8 +686,8 @@ static inline void mtk_rx_irq_enable(struct mtk_eth *eth, u32 mask) u32 val; spin_lock_irqsave(ð->rx_irq_lock, flags); - val = mtk_r32(eth, MTK_PDMA_INT_MASK); - mtk_w32(eth, val | mask, MTK_PDMA_INT_MASK); + val = mtk_r32(eth, eth->soc->reg_map->pdma.irq_mask); + mtk_w32(eth, val | mask, eth->soc->reg_map->pdma.irq_mask); spin_unlock_irqrestore(ð->rx_irq_lock, flags); } @@ -685,39 +738,39 @@ void mtk_stats_update_mac(struct mtk_mac *mac) hw_stats->rx_checksum_errors += mtk_r32(mac->hw, MT7628_SDM_CS_ERR); 
} else { + const struct mtk_reg_map *reg_map = eth->soc->reg_map; unsigned int offs = hw_stats->reg_offset; u64 stats; - hw_stats->rx_bytes += mtk_r32(mac->hw, - MTK_GDM1_RX_GBCNT_L + offs); - stats = mtk_r32(mac->hw, MTK_GDM1_RX_GBCNT_H + offs); + hw_stats->rx_bytes += mtk_r32(mac->hw, reg_map->gdm1_cnt + offs); + stats = mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x4 + offs); if (stats) hw_stats->rx_bytes += (stats << 32); hw_stats->rx_packets += - mtk_r32(mac->hw, MTK_GDM1_RX_GPCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x8 + offs); hw_stats->rx_overflow += - mtk_r32(mac->hw, MTK_GDM1_RX_OERCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x10 + offs); hw_stats->rx_fcs_errors += - mtk_r32(mac->hw, MTK_GDM1_RX_FERCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x14 + offs); hw_stats->rx_short_errors += - mtk_r32(mac->hw, MTK_GDM1_RX_SERCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x18 + offs); hw_stats->rx_long_errors += - mtk_r32(mac->hw, MTK_GDM1_RX_LENCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x1c + offs); hw_stats->rx_checksum_errors += - mtk_r32(mac->hw, MTK_GDM1_RX_CERCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x20 + offs); hw_stats->rx_flow_control_packets += - mtk_r32(mac->hw, MTK_GDM1_RX_FCCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x24 + offs); hw_stats->tx_skip += - mtk_r32(mac->hw, MTK_GDM1_TX_SKIPCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x28 + offs); hw_stats->tx_collisions += - mtk_r32(mac->hw, MTK_GDM1_TX_COLCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x2c + offs); hw_stats->tx_bytes += - mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_L + offs); - stats = mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_H + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x30 + offs); + stats = mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x34 + offs); if (stats) hw_stats->tx_bytes += (stats << 32); hw_stats->tx_packets += - mtk_r32(mac->hw, MTK_GDM1_TX_GPCNT + offs); + mtk_r32(mac->hw, reg_map->gdm1_cnt + 0x38 + offs); } u64_stats_update_end(&hw_stats->syncp); @@ -846,10 +899,10 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) txd->txd4 = 0; } - mtk_w32(eth, eth->phy_scratch_ring, MTK_QDMA_FQ_HEAD); - mtk_w32(eth, phy_ring_tail, MTK_QDMA_FQ_TAIL); - mtk_w32(eth, (cnt << 16) | cnt, MTK_QDMA_FQ_CNT); - mtk_w32(eth, MTK_QDMA_PAGE_SIZE << 16, MTK_QDMA_FQ_BLEN); + mtk_w32(eth, eth->phy_scratch_ring, soc->reg_map->qdma.fq_head); + mtk_w32(eth, phy_ring_tail, soc->reg_map->qdma.fq_tail); + mtk_w32(eth, (cnt << 16) | cnt, soc->reg_map->qdma.fq_count); + mtk_w32(eth, MTK_QDMA_PAGE_SIZE << 16, soc->reg_map->qdma.fq_blen); return 0; } @@ -1093,7 +1146,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, if (MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { if (netif_xmit_stopped(netdev_get_tx_queue(dev, 0)) || !netdev_xmit_more()) - mtk_w32(eth, txd->txd2, MTK_QTX_CTX_PTR); + mtk_w32(eth, txd->txd2, soc->reg_map->qdma.ctx_ptr); } else { int next_idx; @@ -1407,6 +1460,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget, unsigned int *done, unsigned int *bytes) { + const struct mtk_reg_map *reg_map = eth->soc->reg_map; struct mtk_tx_ring *ring = ð->tx_ring; struct mtk_tx_dma *desc; struct sk_buff *skb; @@ -1414,7 +1468,7 @@ static int mtk_poll_tx_qdma(struct mtk_eth *eth, int budget, u32 cpu, dma; cpu = ring->last_free_ptr; - dma = mtk_r32(eth, MTK_QTX_DRX_PTR); + dma = mtk_r32(eth, reg_map->qdma.drx_ptr); desc = mtk_qdma_phys_to_virt(ring, cpu); @@ -1449,7 +1503,7 @@ static int 
mtk_poll_tx_qdma(struct mtk_eth *eth, int budget, } ring->last_free_ptr = cpu; - mtk_w32(eth, cpu, MTK_QTX_CRX_PTR); + mtk_w32(eth, cpu, reg_map->qdma.crx_ptr); return budget; } @@ -1542,24 +1596,25 @@ static void mtk_handle_status_irq(struct mtk_eth *eth) static int mtk_napi_tx(struct napi_struct *napi, int budget) { struct mtk_eth *eth = container_of(napi, struct mtk_eth, tx_napi); + const struct mtk_reg_map *reg_map = eth->soc->reg_map; int tx_done = 0; if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) mtk_handle_status_irq(eth); - mtk_w32(eth, MTK_TX_DONE_INT, eth->tx_int_status_reg); + mtk_w32(eth, MTK_TX_DONE_INT, reg_map->tx_irq_status); tx_done = mtk_poll_tx(eth, budget); if (unlikely(netif_msg_intr(eth))) { dev_info(eth->dev, "done tx %d, intr 0x%08x/0x%x\n", tx_done, - mtk_r32(eth, eth->tx_int_status_reg), - mtk_r32(eth, eth->tx_int_mask_reg)); + mtk_r32(eth, reg_map->tx_irq_status), + mtk_r32(eth, reg_map->tx_irq_mask)); } if (tx_done == budget) return budget; - if (mtk_r32(eth, eth->tx_int_status_reg) & MTK_TX_DONE_INT) + if (mtk_r32(eth, reg_map->tx_irq_status) & MTK_TX_DONE_INT) return budget; if (napi_complete_done(napi, tx_done)) @@ -1571,6 +1626,7 @@ static int mtk_napi_tx(struct napi_struct *napi, int budget) static int mtk_napi_rx(struct napi_struct *napi, int budget) { struct mtk_eth *eth = container_of(napi, struct mtk_eth, rx_napi); + const struct mtk_reg_map *reg_map = eth->soc->reg_map; int rx_done_total = 0; mtk_handle_status_irq(eth); @@ -1578,21 +1634,21 @@ static int mtk_napi_rx(struct napi_struct *napi, int budget) do { int rx_done; - mtk_w32(eth, MTK_RX_DONE_INT, MTK_PDMA_INT_STATUS); + mtk_w32(eth, MTK_RX_DONE_INT, reg_map->pdma.irq_status); rx_done = mtk_poll_rx(napi, budget - rx_done_total, eth); rx_done_total += rx_done; if (unlikely(netif_msg_intr(eth))) { dev_info(eth->dev, "done rx %d, intr 0x%08x/0x%x\n", rx_done, - mtk_r32(eth, MTK_PDMA_INT_STATUS), - mtk_r32(eth, MTK_PDMA_INT_MASK)); + mtk_r32(eth, reg_map->pdma.irq_status), + mtk_r32(eth, reg_map->pdma.irq_mask)); } if (rx_done_total == budget) return budget; - } while (mtk_r32(eth, MTK_PDMA_INT_STATUS) & MTK_RX_DONE_INT); + } while (mtk_r32(eth, reg_map->pdma.irq_status) & MTK_RX_DONE_INT); if (napi_complete_done(napi, rx_done_total)) mtk_rx_irq_enable(eth, MTK_RX_DONE_INT); @@ -1655,20 +1711,20 @@ static int mtk_tx_alloc(struct mtk_eth *eth) */ wmb(); - if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { - mtk_w32(eth, ring->phys, MTK_QTX_CTX_PTR); - mtk_w32(eth, ring->phys, MTK_QTX_DTX_PTR); + if (MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { + mtk_w32(eth, ring->phys, soc->reg_map->qdma.ctx_ptr); + mtk_w32(eth, ring->phys, soc->reg_map->qdma.dtx_ptr); mtk_w32(eth, ring->phys + ((MTK_DMA_SIZE - 1) * sz), - MTK_QTX_CRX_PTR); - mtk_w32(eth, ring->last_free_ptr, MTK_QTX_DRX_PTR); + soc->reg_map->qdma.crx_ptr); + mtk_w32(eth, ring->last_free_ptr, soc->reg_map->qdma.drx_ptr); mtk_w32(eth, (QDMA_RES_THRES << 8) | QDMA_RES_THRES, - MTK_QTX_CFG(0)); + soc->reg_map->qdma.qtx_cfg); } else { mtk_w32(eth, ring->phys_pdma, MT7628_TX_BASE_PTR0); mtk_w32(eth, MTK_DMA_SIZE, MT7628_TX_MAX_CNT0); mtk_w32(eth, 0, MT7628_TX_CTX_IDX0); - mtk_w32(eth, MT7628_PST_DTX_IDX0, MTK_PDMA_RST_IDX); + mtk_w32(eth, MT7628_PST_DTX_IDX0, soc->reg_map->pdma.rst_idx); } return 0; @@ -1707,6 +1763,7 @@ static void mtk_tx_clean(struct mtk_eth *eth) static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) { + const struct mtk_reg_map *reg_map = eth->soc->reg_map; struct mtk_rx_ring *ring; int rx_data_len, rx_dma_size; int i; @@ -1772,16 
+1829,18 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) ring->dma_size = rx_dma_size; ring->calc_idx_update = false; ring->calc_idx = rx_dma_size - 1; - ring->crx_idx_reg = MTK_PRX_CRX_IDX_CFG(ring_no); + ring->crx_idx_reg = reg_map->pdma.pcrx_ptr + ring_no * MTK_QRX_OFFSET; /* make sure that all changes to the dma ring are flushed before we * continue */ wmb(); - mtk_w32(eth, ring->phys, MTK_PRX_BASE_PTR_CFG(ring_no) + offset); - mtk_w32(eth, rx_dma_size, MTK_PRX_MAX_CNT_CFG(ring_no) + offset); + mtk_w32(eth, ring->phys, + reg_map->pdma.rx_ptr + ring_no * MTK_QRX_OFFSET + offset); + mtk_w32(eth, rx_dma_size, + reg_map->pdma.rx_cnt_cfg + ring_no * MTK_QRX_OFFSET + offset); mtk_w32(eth, ring->calc_idx, ring->crx_idx_reg + offset); - mtk_w32(eth, MTK_PST_DRX_IDX_CFG(ring_no), MTK_PDMA_RST_IDX + offset); + mtk_w32(eth, MTK_PST_DRX_IDX_CFG(ring_no), reg_map->pdma.rst_idx + offset); return 0; } @@ -2087,9 +2146,9 @@ static int mtk_dma_busy_wait(struct mtk_eth *eth) u32 val; if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) - reg = MTK_QDMA_GLO_CFG; + reg = eth->soc->reg_map->qdma.glo_cfg; else - reg = MTK_PDMA_GLO_CFG; + reg = eth->soc->reg_map->pdma.glo_cfg; ret = readx_poll_timeout_atomic(__raw_readl, eth->base + reg, val, !(val & (MTK_RX_DMA_BUSY | MTK_TX_DMA_BUSY)), @@ -2147,8 +2206,8 @@ static int mtk_dma_init(struct mtk_eth *eth) * automatically */ mtk_w32(eth, FC_THRES_DROP_MODE | FC_THRES_DROP_EN | - FC_THRES_MIN, MTK_QDMA_FC_THRES); - mtk_w32(eth, 0x0, MTK_QDMA_HRED2); + FC_THRES_MIN, eth->soc->reg_map->qdma.fc_th); + mtk_w32(eth, 0x0, eth->soc->reg_map->qdma.hred); } return 0; @@ -2222,13 +2281,14 @@ static irqreturn_t mtk_handle_irq_tx(int irq, void *_eth) static irqreturn_t mtk_handle_irq(int irq, void *_eth) { struct mtk_eth *eth = _eth; + const struct mtk_reg_map *reg_map = eth->soc->reg_map; - if (mtk_r32(eth, MTK_PDMA_INT_MASK) & MTK_RX_DONE_INT) { - if (mtk_r32(eth, MTK_PDMA_INT_STATUS) & MTK_RX_DONE_INT) + if (mtk_r32(eth, reg_map->pdma.irq_mask) & MTK_RX_DONE_INT) { + if (mtk_r32(eth, reg_map->pdma.irq_status) & MTK_RX_DONE_INT) mtk_handle_irq_rx(irq, _eth); } - if (mtk_r32(eth, eth->tx_int_mask_reg) & MTK_TX_DONE_INT) { - if (mtk_r32(eth, eth->tx_int_status_reg) & MTK_TX_DONE_INT) + if (mtk_r32(eth, reg_map->tx_irq_mask) & MTK_TX_DONE_INT) { + if (mtk_r32(eth, reg_map->tx_irq_status) & MTK_TX_DONE_INT) mtk_handle_irq_tx(irq, _eth); } @@ -2252,6 +2312,7 @@ static void mtk_poll_controller(struct net_device *dev) static int mtk_start_dma(struct mtk_eth *eth) { u32 rx_2b_offset = (NET_IP_ALIGN == 2) ? 
MTK_RX_2B_OFFSET : 0; + const struct mtk_reg_map *reg_map = eth->soc->reg_map; int err; err = mtk_dma_init(eth); @@ -2266,16 +2327,15 @@ static int mtk_start_dma(struct mtk_eth *eth) MTK_TX_BT_32DWORDS | MTK_NDP_CO_PRO | MTK_RX_DMA_EN | MTK_RX_2B_OFFSET | MTK_RX_BT_32DWORDS, - MTK_QDMA_GLO_CFG); - + reg_map->qdma.glo_cfg); mtk_w32(eth, MTK_RX_DMA_EN | rx_2b_offset | MTK_RX_BT_32DWORDS | MTK_MULTI_EN, - MTK_PDMA_GLO_CFG); + reg_map->pdma.glo_cfg); } else { mtk_w32(eth, MTK_TX_WB_DDONE | MTK_TX_DMA_EN | MTK_RX_DMA_EN | MTK_MULTI_EN | MTK_PDMA_SIZE_8DWORDS, - MTK_PDMA_GLO_CFG); + reg_map->pdma.glo_cfg); } return 0; @@ -2398,8 +2458,8 @@ static int mtk_stop(struct net_device *dev) cancel_work_sync(ð->tx_dim.work); if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) - mtk_stop_dma(eth, MTK_QDMA_GLO_CFG); - mtk_stop_dma(eth, MTK_PDMA_GLO_CFG); + mtk_stop_dma(eth, eth->soc->reg_map->qdma.glo_cfg); + mtk_stop_dma(eth, eth->soc->reg_map->pdma.glo_cfg); mtk_dma_free(eth); @@ -2453,6 +2513,7 @@ static void mtk_dim_rx(struct work_struct *work) { struct dim *dim = container_of(work, struct dim, work); struct mtk_eth *eth = container_of(dim, struct mtk_eth, rx_dim); + const struct mtk_reg_map *reg_map = eth->soc->reg_map; struct dim_cq_moder cur_profile; u32 val, cur; @@ -2460,7 +2521,7 @@ static void mtk_dim_rx(struct work_struct *work) dim->profile_ix); spin_lock_bh(ð->dim_lock); - val = mtk_r32(eth, MTK_PDMA_DELAY_INT); + val = mtk_r32(eth, reg_map->pdma.delay_irq); val &= MTK_PDMA_DELAY_TX_MASK; val |= MTK_PDMA_DELAY_RX_EN; @@ -2470,9 +2531,9 @@ static void mtk_dim_rx(struct work_struct *work) cur = min_t(u32, cur_profile.pkts, MTK_PDMA_DELAY_PINT_MASK); val |= cur << MTK_PDMA_DELAY_RX_PINT_SHIFT; - mtk_w32(eth, val, MTK_PDMA_DELAY_INT); + mtk_w32(eth, val, reg_map->pdma.delay_irq); if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) - mtk_w32(eth, val, MTK_QDMA_DELAY_INT); + mtk_w32(eth, val, reg_map->qdma.delay_irq); spin_unlock_bh(ð->dim_lock); @@ -2483,6 +2544,7 @@ static void mtk_dim_tx(struct work_struct *work) { struct dim *dim = container_of(work, struct dim, work); struct mtk_eth *eth = container_of(dim, struct mtk_eth, tx_dim); + const struct mtk_reg_map *reg_map = eth->soc->reg_map; struct dim_cq_moder cur_profile; u32 val, cur; @@ -2490,7 +2552,7 @@ static void mtk_dim_tx(struct work_struct *work) dim->profile_ix); spin_lock_bh(ð->dim_lock); - val = mtk_r32(eth, MTK_PDMA_DELAY_INT); + val = mtk_r32(eth, reg_map->pdma.delay_irq); val &= MTK_PDMA_DELAY_RX_MASK; val |= MTK_PDMA_DELAY_TX_EN; @@ -2500,9 +2562,9 @@ static void mtk_dim_tx(struct work_struct *work) cur = min_t(u32, cur_profile.pkts, MTK_PDMA_DELAY_PINT_MASK); val |= cur << MTK_PDMA_DELAY_TX_PINT_SHIFT; - mtk_w32(eth, val, MTK_PDMA_DELAY_INT); + mtk_w32(eth, val, reg_map->pdma.delay_irq); if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) - mtk_w32(eth, val, MTK_QDMA_DELAY_INT); + mtk_w32(eth, val, reg_map->qdma.delay_irq); spin_unlock_bh(ð->dim_lock); @@ -2513,6 +2575,7 @@ static int mtk_hw_init(struct mtk_eth *eth) { u32 dma_mask = ETHSYS_DMA_AG_MAP_PDMA | ETHSYS_DMA_AG_MAP_QDMA | ETHSYS_DMA_AG_MAP_PPE; + const struct mtk_reg_map *reg_map = eth->soc->reg_map; int i, val, ret; if (test_and_set_bit(MTK_HW_INIT, ð->state)) @@ -2587,10 +2650,10 @@ static int mtk_hw_init(struct mtk_eth *eth) mtk_rx_irq_disable(eth, ~0); /* FE int grouping */ - mtk_w32(eth, MTK_TX_DONE_INT, MTK_PDMA_INT_GRP1); - mtk_w32(eth, MTK_RX_DONE_INT, MTK_PDMA_INT_GRP2); - mtk_w32(eth, MTK_TX_DONE_INT, MTK_QDMA_INT_GRP1); - mtk_w32(eth, MTK_RX_DONE_INT, MTK_QDMA_INT_GRP2); + 
mtk_w32(eth, MTK_TX_DONE_INT, reg_map->pdma.int_grp); + mtk_w32(eth, MTK_RX_DONE_INT, reg_map->pdma.int_grp + 4); + mtk_w32(eth, MTK_TX_DONE_INT, reg_map->qdma.int_grp); + mtk_w32(eth, MTK_RX_DONE_INT, reg_map->qdma.int_grp + 4); mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP); return 0; @@ -3153,14 +3216,6 @@ static int mtk_probe(struct platform_device *pdev) if (IS_ERR(eth->base)) return PTR_ERR(eth->base); - if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { - eth->tx_int_mask_reg = MTK_QDMA_INT_MASK; - eth->tx_int_status_reg = MTK_QDMA_INT_STATUS; - } else { - eth->tx_int_mask_reg = MTK_PDMA_INT_MASK; - eth->tx_int_status_reg = MTK_PDMA_INT_STATUS; - } - if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) { eth->rx_dma_l4_valid = RX_DMA_L4_VALID_PDMA; eth->ip_align = NET_IP_ALIGN; @@ -3394,6 +3449,7 @@ static int mtk_remove(struct platform_device *pdev) } static const struct mtk_soc_data mt2701_data = { + .reg_map = &mtk_reg_map, .caps = MT7623_CAPS | MTK_HWLRO, .hw_features = MTK_HW_FEATURES, .required_clks = MT7623_CLKS_BITMAP, @@ -3405,6 +3461,7 @@ static const struct mtk_soc_data mt2701_data = { }; static const struct mtk_soc_data mt7621_data = { + .reg_map = &mtk_reg_map, .caps = MT7621_CAPS, .hw_features = MTK_HW_FEATURES, .required_clks = MT7621_CLKS_BITMAP, @@ -3417,6 +3474,7 @@ static const struct mtk_soc_data mt7621_data = { }; static const struct mtk_soc_data mt7622_data = { + .reg_map = &mtk_reg_map, .ana_rgc3 = 0x2028, .caps = MT7622_CAPS | MTK_HWLRO, .hw_features = MTK_HW_FEATURES, @@ -3430,6 +3488,7 @@ static const struct mtk_soc_data mt7622_data = { }; static const struct mtk_soc_data mt7623_data = { + .reg_map = &mtk_reg_map, .caps = MT7623_CAPS | MTK_HWLRO, .hw_features = MTK_HW_FEATURES, .required_clks = MT7623_CLKS_BITMAP, @@ -3442,6 +3501,7 @@ static const struct mtk_soc_data mt7623_data = { }; static const struct mtk_soc_data mt7629_data = { + .reg_map = &mtk_reg_map, .ana_rgc3 = 0x128, .caps = MT7629_CAPS | MTK_HWLRO, .hw_features = MTK_HW_FEATURES, @@ -3454,6 +3514,7 @@ static const struct mtk_soc_data mt7629_data = { }; static const struct mtk_soc_data rt5350_data = { + .reg_map = &mt7628_reg_map, .caps = MT7628_CAPS, .hw_features = MTK_HW_FEATURES_MT7628, .required_clks = MT7628_CLKS_BITMAP, diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index dcbf4b5c70e0..7e29f042357f 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -48,6 +48,8 @@ #define MTK_HW_FEATURES_MT7628 (NETIF_F_SG | NETIF_F_RXCSUM) #define NEXT_DESP_IDX(X, Y) (((X) + 1) & ((Y) - 1)) +#define MTK_QRX_OFFSET 0x10 + #define MTK_MAX_RX_RING_NUM 4 #define MTK_HW_LRO_DMA_SIZE 8 @@ -100,18 +102,6 @@ /* Unicast Filter MAC Address Register - High */ #define MTK_GDMA_MAC_ADRH(x) (0x50C + (x * 0x1000)) -/* PDMA RX Base Pointer Register */ -#define MTK_PRX_BASE_PTR0 0x900 -#define MTK_PRX_BASE_PTR_CFG(x) (MTK_PRX_BASE_PTR0 + (x * 0x10)) - -/* PDMA RX Maximum Count Register */ -#define MTK_PRX_MAX_CNT0 0x904 -#define MTK_PRX_MAX_CNT_CFG(x) (MTK_PRX_MAX_CNT0 + (x * 0x10)) - -/* PDMA RX CPU Pointer Register */ -#define MTK_PRX_CRX_IDX0 0x908 -#define MTK_PRX_CRX_IDX_CFG(x) (MTK_PRX_CRX_IDX0 + (x * 0x10)) - /* PDMA HW LRO Control Registers */ #define MTK_PDMA_LRO_CTRL_DW0 0x980 #define MTK_LRO_EN BIT(0) @@ -126,18 +116,19 @@ #define MTK_ADMA_MODE BIT(15) #define MTK_LRO_MIN_RXD_SDL (MTK_HW_LRO_SDL_REMAIN_ROOM << 16) -/* PDMA Global Configuration Register */ -#define MTK_PDMA_GLO_CFG 0xa04 +#define 
MTK_RX_DMA_LRO_EN BIT(8) #define MTK_MULTI_EN BIT(10) #define MTK_PDMA_SIZE_8DWORDS (1 << 4) +/* PDMA Global Configuration Register */ +#define MTK_PDMA_LRO_SDL 0x3000 +#define MTK_RX_CFG_SDL_OFFSET 16 + /* PDMA Reset Index Register */ -#define MTK_PDMA_RST_IDX 0xa08 #define MTK_PST_DRX_IDX0 BIT(16) #define MTK_PST_DRX_IDX_CFG(x) (MTK_PST_DRX_IDX0 << (x)) /* PDMA Delay Interrupt Register */ -#define MTK_PDMA_DELAY_INT 0xa0c #define MTK_PDMA_DELAY_RX_MASK GENMASK(15, 0) #define MTK_PDMA_DELAY_RX_EN BIT(15) #define MTK_PDMA_DELAY_RX_PINT_SHIFT 8 @@ -151,19 +142,9 @@ #define MTK_PDMA_DELAY_PINT_MASK 0x7f #define MTK_PDMA_DELAY_PTIME_MASK 0xff -/* PDMA Interrupt Status Register */ -#define MTK_PDMA_INT_STATUS 0xa20 - -/* PDMA Interrupt Mask Register */ -#define MTK_PDMA_INT_MASK 0xa28 - /* PDMA HW LRO Alter Flow Delta Register */ #define MTK_PDMA_LRO_ALT_SCORE_DELTA 0xa4c -/* PDMA Interrupt grouping registers */ -#define MTK_PDMA_INT_GRP1 0xa50 -#define MTK_PDMA_INT_GRP2 0xa54 - /* PDMA HW LRO IP Setting Registers */ #define MTK_LRO_RX_RING0_DIP_DW0 0xb04 #define MTK_LRO_DIP_DW0_CFG(x) (MTK_LRO_RX_RING0_DIP_DW0 + (x * 0x40)) @@ -185,26 +166,9 @@ #define MTK_RING_MAX_AGG_CNT_H ((MTK_HW_LRO_MAX_AGG_CNT >> 6) & 0x3) /* QDMA TX Queue Configuration Registers */ -#define MTK_QTX_CFG(x) (0x1800 + (x * 0x10)) #define QDMA_RES_THRES 4 -/* QDMA TX Queue Scheduler Registers */ -#define MTK_QTX_SCH(x) (0x1804 + (x * 0x10)) - -/* QDMA RX Base Pointer Register */ -#define MTK_QRX_BASE_PTR0 0x1900 - -/* QDMA RX Maximum Count Register */ -#define MTK_QRX_MAX_CNT0 0x1904 - -/* QDMA RX CPU Pointer Register */ -#define MTK_QRX_CRX_IDX0 0x1908 - -/* QDMA RX DMA Pointer Register */ -#define MTK_QRX_DRX_IDX0 0x190C - /* QDMA Global Configuration Register */ -#define MTK_QDMA_GLO_CFG 0x1A04 #define MTK_RX_2B_OFFSET BIT(31) #define MTK_RX_BT_32DWORDS (3 << 11) #define MTK_NDP_CO_PRO BIT(10) @@ -216,20 +180,12 @@ #define MTK_TX_DMA_EN BIT(0) #define MTK_DMA_BUSY_TIMEOUT_US 1000000 -/* QDMA Reset Index Register */ -#define MTK_QDMA_RST_IDX 0x1A08 - -/* QDMA Delay Interrupt Register */ -#define MTK_QDMA_DELAY_INT 0x1A0C - /* QDMA Flow Control Register */ -#define MTK_QDMA_FC_THRES 0x1A10 #define FC_THRES_DROP_MODE BIT(20) #define FC_THRES_DROP_EN (7 << 16) #define FC_THRES_MIN 0x4444 /* QDMA Interrupt Status Register */ -#define MTK_QDMA_INT_STATUS 0x1A18 #define MTK_RX_DONE_DLY BIT(30) #define MTK_TX_DONE_DLY BIT(28) #define MTK_RX_DONE_INT3 BIT(19) @@ -244,55 +200,8 @@ #define MTK_TX_DONE_INT MTK_TX_DONE_DLY /* QDMA Interrupt grouping registers */ -#define MTK_QDMA_INT_GRP1 0x1a20 -#define MTK_QDMA_INT_GRP2 0x1a24 #define MTK_RLS_DONE_INT BIT(0) -/* QDMA Interrupt Status Register */ -#define MTK_QDMA_INT_MASK 0x1A1C - -/* QDMA Interrupt Mask Register */ -#define MTK_QDMA_HRED2 0x1A44 - -/* QDMA TX Forward CPU Pointer Register */ -#define MTK_QTX_CTX_PTR 0x1B00 - -/* QDMA TX Forward DMA Pointer Register */ -#define MTK_QTX_DTX_PTR 0x1B04 - -/* QDMA TX Release CPU Pointer Register */ -#define MTK_QTX_CRX_PTR 0x1B10 - -/* QDMA TX Release DMA Pointer Register */ -#define MTK_QTX_DRX_PTR 0x1B14 - -/* QDMA FQ Head Pointer Register */ -#define MTK_QDMA_FQ_HEAD 0x1B20 - -/* QDMA FQ Head Pointer Register */ -#define MTK_QDMA_FQ_TAIL 0x1B24 - -/* QDMA FQ Free Page Counter Register */ -#define MTK_QDMA_FQ_CNT 0x1B28 - -/* QDMA FQ Free Page Buffer Length Register */ -#define MTK_QDMA_FQ_BLEN 0x1B2C - -/* GMA1 counter / statics register */ -#define MTK_GDM1_RX_GBCNT_L 0x2400 -#define MTK_GDM1_RX_GBCNT_H 0x2404 -#define 
MTK_GDM1_RX_GPCNT 0x2408 -#define MTK_GDM1_RX_OERCNT 0x2410 -#define MTK_GDM1_RX_FERCNT 0x2414 -#define MTK_GDM1_RX_SERCNT 0x2418 -#define MTK_GDM1_RX_LENCNT 0x241c -#define MTK_GDM1_RX_CERCNT 0x2420 -#define MTK_GDM1_RX_FCCNT 0x2424 -#define MTK_GDM1_TX_SKIPCNT 0x2428 -#define MTK_GDM1_TX_COLCNT 0x242c -#define MTK_GDM1_TX_GBCNT_L 0x2430 -#define MTK_GDM1_TX_GBCNT_H 0x2434 -#define MTK_GDM1_TX_GPCNT 0x2438 #define MTK_STAT_OFFSET 0x40 #define MTK_WDMA0_BASE 0x2800 @@ -857,8 +766,46 @@ struct mtk_tx_dma_desc_info { u8 last:1; }; +struct mtk_reg_map { + u32 tx_irq_mask; + u32 tx_irq_status; + struct { + u32 rx_ptr; /* rx base pointer */ + u32 rx_cnt_cfg; /* rx max count configuration */ + u32 pcrx_ptr; /* rx cpu pointer */ + u32 glo_cfg; /* global configuration */ + u32 rst_idx; /* reset index */ + u32 delay_irq; /* delay interrupt */ + u32 irq_status; /* interrupt status */ + u32 irq_mask; /* interrupt mask */ + u32 int_grp; + } pdma; + struct { + u32 qtx_cfg; /* tx queue configuration */ + u32 rx_ptr; /* rx base pointer */ + u32 rx_cnt_cfg; /* rx max count configuration */ + u32 qcrx_ptr; /* rx cpu pointer */ + u32 glo_cfg; /* global configuration */ + u32 rst_idx; /* reset index */ + u32 delay_irq; /* delay interrupt */ + u32 fc_th; /* flow control */ + u32 int_grp; + u32 hred; /* interrupt mask */ + u32 ctx_ptr; /* tx acquire cpu pointer */ + u32 dtx_ptr; /* tx acquire dma pointer */ + u32 crx_ptr; /* tx release cpu pointer */ + u32 drx_ptr; /* tx release dma pointer */ + u32 fq_head; /* fq head pointer */ + u32 fq_tail; /* fq tail pointer */ + u32 fq_count; /* fq free page count */ + u32 fq_blen; /* fq free page buffer length */ + } qdma; + u32 gdm1_cnt; +}; + /* struct mtk_eth_data - This is the structure holding all differences * among various plaforms + * @reg_map Soc register map. * @ana_rgc3: The offset for register ANA_RGC3 related to * sgmiisys syscon * @caps Flags shown the extra capability for the SoC @@ -871,6 +818,7 @@ struct mtk_tx_dma_desc_info { * @rxd_size Rx DMA descriptor size. 
*/ struct mtk_soc_data { + const struct mtk_reg_map *reg_map; u32 ana_rgc3; u32 caps; u32 required_clks; @@ -999,8 +947,6 @@ struct mtk_eth { u32 tx_bytes; struct dim tx_dim; - u32 tx_int_mask_reg; - u32 tx_int_status_reg; u32 rx_dma_l4_valid; int ip_align; From patchwork Fri May 20 18:11:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 575141 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BA97C4332F for ; Fri, 20 May 2022 18:13:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352464AbiETSNJ (ORCPT ); Fri, 20 May 2022 14:13:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352453AbiETSNG (ORCPT ); Fri, 20 May 2022 14:13:06 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C547D18FF12; Fri, 20 May 2022 11:13:03 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 40731B82D90; Fri, 20 May 2022 18:13:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 320DFC36AE5; Fri, 20 May 2022 18:12:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070381; bh=VsszdDIEVXOQBccl/ciktK3N2ivWYe8RrmDAxZCFLaI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OGuZfXibEvvW0tJmEqU+8E82bBWn0lmVbcp7iG1eWeISPV6+xREY4st2VjmHcQ/in nK1QAcsalQIcQjJrzVZ7v3YIhDd+3JzCDbFdJRgOmi3kP/Ck8Wori/bG5L0KLZh5oq pElZCris8UZLO9IR3JuNLtJr9XJQM9EoXZ/PJhR5or69TfXLseUKx5+xmbdbI/xoA8 1vL20LxP5FmkceUIJsBbDnnfY4wUMUjIbXqlZEz6LOjNFvTcYXMvrGJFANYpfiYmS/ zUQOsnh0VNEKVdyy9neh/0FZwSiXzp7gQngGfMYxnxoM0ciRR/QUZVXaXJHnQ/d+la o4BO6HPc7Be3g== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 13/16] net: ethernet: mtk_eth_soc: introduce MTK_NETSYS_V2 support Date: Fri, 20 May 2022 20:11:36 +0200 Message-Id: X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Introduce MTK_NETSYS_V2 support. MTK_NETSYS_V2 defines 32B TX/RX DMA descriptors. This is a preliminary patch to add mt7986 ethernet support. 
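[Editorial sketch, not part of the patch: a standalone user-space illustration (assuming 4-byte unsigned int; the driver structs additionally carry __packed __aligned(4)) of where the 16-byte vs 32-byte rxd_size/txd_size values come from. The NETSYS_V2 descriptors simply extend the v1 layout from four to eight 32-bit words, and the caps-gated branches in mtk_rx_get_desc()/mtk_tx_alloc() only touch rxd5..rxd8/txd5..txd8 on MTK_NETSYS_V2 parts, so v1 SoCs keep the 16-byte stride.]

    struct rx_dma_v1 { unsigned int rxd1, rxd2, rxd3, rxd4; };
    struct rx_dma_v2 { unsigned int rxd1, rxd2, rxd3, rxd4,
                       rxd5, rxd6, rxd7, rxd8; };

    _Static_assert(sizeof(struct rx_dma_v1) == 16, "v1 rx descriptor is 16 bytes");
    _Static_assert(sizeof(struct rx_dma_v2) == 32, "NETSYS_V2 rx descriptor is 32 bytes");

    int main(void)
    {
            return 0;
    }
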
Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 321 ++++++++++++++++---- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 126 +++++++- 2 files changed, 372 insertions(+), 75 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 503829a9c270..9169f6360ab5 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -844,8 +844,8 @@ static inline int mtk_max_buf_size(int frag_size) return buf_size; } -static inline bool mtk_rx_get_desc(struct mtk_rx_dma *rxd, - struct mtk_rx_dma *dma_rxd) +static bool mtk_rx_get_desc(struct mtk_eth *eth, struct mtk_rx_dma_v2 *rxd, + struct mtk_rx_dma_v2 *dma_rxd) { rxd->rxd2 = READ_ONCE(dma_rxd->rxd2); if (!(rxd->rxd2 & RX_DMA_DONE)) @@ -854,6 +854,10 @@ static inline bool mtk_rx_get_desc(struct mtk_rx_dma *rxd, rxd->rxd1 = READ_ONCE(dma_rxd->rxd1); rxd->rxd3 = READ_ONCE(dma_rxd->rxd3); rxd->rxd4 = READ_ONCE(dma_rxd->rxd4); + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + rxd->rxd5 = READ_ONCE(dma_rxd->rxd5); + rxd->rxd6 = READ_ONCE(dma_rxd->rxd6); + } return true; } @@ -887,7 +891,7 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) phy_ring_tail = eth->phy_scratch_ring + soc->txrx.txd_size * (cnt - 1); for (i = 0; i < cnt; i++) { - struct mtk_tx_dma *txd; + struct mtk_tx_dma_v2 *txd; txd = (void *)eth->scratch_ring + i * soc->txrx.txd_size; txd->txd1 = dma_addr + i * MTK_QDMA_PAGE_SIZE; @@ -897,6 +901,12 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) txd->txd3 = TX_DMA_PLEN0(MTK_QDMA_PAGE_SIZE); txd->txd4 = 0; + if (MTK_HAS_CAPS(soc->caps, MTK_NETSYS_V2)) { + txd->txd5 = 0; + txd->txd6 = 0; + txd->txd7 = 0; + txd->txd8 = 0; + } } mtk_w32(eth, eth->phy_scratch_ring, soc->reg_map->qdma.fq_head); @@ -1000,10 +1010,12 @@ static void setup_tx_buf(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, } } -static void mtk_tx_set_dma_desc(struct net_device *dev, struct mtk_tx_dma *desc, - struct mtk_tx_dma_desc_info *info) +static void mtk_tx_set_dma_desc_v1(struct net_device *dev, void *txd, + struct mtk_tx_dma_desc_info *info) { struct mtk_mac *mac = netdev_priv(dev); + struct mtk_eth *eth = mac->hw; + struct mtk_tx_dma *desc = txd; u32 data; WRITE_ONCE(desc->txd1, info->addr); @@ -1027,6 +1039,59 @@ static void mtk_tx_set_dma_desc(struct net_device *dev, struct mtk_tx_dma *desc, WRITE_ONCE(desc->txd4, data); } +static void mtk_tx_set_dma_desc_v2(struct net_device *dev, void *txd, + struct mtk_tx_dma_desc_info *info) +{ + struct mtk_mac *mac = netdev_priv(dev); + struct mtk_tx_dma_v2 *desc = txd; + struct mtk_eth *eth = mac->hw; + u32 data; + + WRITE_ONCE(desc->txd1, info->addr); + + data = TX_DMA_PLEN0(info->size); + if (info->last) + data |= TX_DMA_LS0; + WRITE_ONCE(desc->txd3, data); + + if (!info->qid && mac->id) + info->qid = MTK_QDMA_GMAC2_QID; + + data = (mac->id + 1) << TX_DMA_FPORT_SHIFT_V2; /* forward port */ + data |= TX_DMA_SWC_V2 | QID_BITS_V2(info->qid); + WRITE_ONCE(desc->txd4, data); + + data = 0; + if (info->first) { + if (info->gso) + data |= TX_DMA_TSO_V2; + /* tx checksum offload */ + if (info->csum) + data |= TX_DMA_CHKSUM_V2; + } + WRITE_ONCE(desc->txd5, data); + + data = 0; + if (info->first && info->vlan) + data |= TX_DMA_INS_VLAN_V2 | info->vlan_tci; + WRITE_ONCE(desc->txd6, data); + + WRITE_ONCE(desc->txd7, 0); + WRITE_ONCE(desc->txd8, 0); +} + +static void mtk_tx_set_dma_desc(struct net_device *dev, void *txd, + struct mtk_tx_dma_desc_info *info) +{ + struct mtk_mac 
*mac = netdev_priv(dev); + struct mtk_eth *eth = mac->hw; + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + mtk_tx_set_dma_desc_v2(dev, txd, info); + else + mtk_tx_set_dma_desc_v1(dev, txd, info); +} + static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, int tx_num, struct mtk_tx_ring *ring, bool gso) { @@ -1035,6 +1100,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, .gso = gso, .csum = skb->ip_summed == CHECKSUM_PARTIAL, .vlan = skb_vlan_tag_present(skb), + .qid = skb->mark & MTK_QDMA_TX_MASK, .vlan_tci = skb_vlan_tag_get(skb), .first = true, .last = !skb_is_nonlinear(skb), @@ -1094,7 +1160,9 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, } memset(&txd_info, 0, sizeof(struct mtk_tx_dma_desc_info)); - txd_info.size = min(frag_size, MTK_TX_DMA_BUF_LEN); + txd_info.size = min_t(unsigned int, frag_size, + soc->txrx.dma_max_len); + txd_info.qid = skb->mark & MTK_QDMA_TX_MASK; txd_info.last = i == skb_shinfo(skb)->nr_frags - 1 && !(frag_size - txd_info.size); txd_info.addr = skb_frag_dma_map(eth->dma_dev, frag, @@ -1175,17 +1243,16 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev, return -ENOMEM; } -static inline int mtk_cal_txd_req(struct sk_buff *skb) +static int mtk_cal_txd_req(struct mtk_eth *eth, struct sk_buff *skb) { - int i, nfrags; + int i, nfrags = 1; skb_frag_t *frag; - nfrags = 1; if (skb_is_gso(skb)) { for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { frag = &skb_shinfo(skb)->frags[i]; nfrags += DIV_ROUND_UP(skb_frag_size(frag), - MTK_TX_DMA_BUF_LEN); + eth->soc->txrx.dma_max_len); } } else { nfrags += skb_shinfo(skb)->nr_frags; @@ -1237,7 +1304,7 @@ static netdev_tx_t mtk_start_xmit(struct sk_buff *skb, struct net_device *dev) if (unlikely(test_bit(MTK_RESETTING, ð->state))) goto drop; - tx_num = mtk_cal_txd_req(skb); + tx_num = mtk_cal_txd_req(eth, skb); if (unlikely(atomic_read(&ring->free_count) <= tx_num)) { netif_stop_queue(dev); netif_err(eth, tx_queued, dev, @@ -1329,7 +1396,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, int idx; struct sk_buff *skb; u8 *data, *new_data; - struct mtk_rx_dma *rxd, trxd; + struct mtk_rx_dma_v2 *rxd, trxd; int done = 0, bytes = 0; while (done < budget) { @@ -1337,7 +1404,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, unsigned int pktlen; dma_addr_t dma_addr; u32 hash, reason; - int mac; + int mac = 0; ring = mtk_get_rx_ring(eth); if (unlikely(!ring)) @@ -1347,16 +1414,15 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, rxd = (void *)ring->dma + idx * eth->soc->txrx.rxd_size; data = ring->data[idx]; - if (!mtk_rx_get_desc(&trxd, rxd)) + if (!mtk_rx_get_desc(eth, &trxd, rxd)) break; /* find out which mac the packet come from. 
values start at 1 */ - if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628) || - (trxd.rxd4 & RX_DMA_SPECIAL_TAG)) - mac = 0; - else - mac = ((trxd.rxd4 >> RX_DMA_FPORT_SHIFT) & - RX_DMA_FPORT_MASK) - 1; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + mac = RX_DMA_GET_SPORT_V2(trxd.rxd5) - 1; + else if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628) && + !(trxd.rxd4 & RX_DMA_SPECIAL_TAG)) + mac = RX_DMA_GET_SPORT(trxd.rxd4) - 1; if (unlikely(mac < 0 || mac >= MTK_MAC_COUNT || !eth->netdev[mac])) @@ -1399,7 +1465,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, pktlen = RX_DMA_GET_PLEN0(trxd.rxd2); skb->dev = netdev; skb_put(skb, pktlen); - if (trxd.rxd4 & eth->rx_dma_l4_valid) + if (trxd.rxd4 & eth->soc->txrx.rx_dma_l4_valid) skb->ip_summed = CHECKSUM_UNNECESSARY; else skb_checksum_none_assert(skb); @@ -1417,10 +1483,25 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, mtk_ppe_check_skb(eth->ppe, skb, trxd.rxd4 & MTK_RXD4_FOE_ENTRY); - if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX && - (trxd.rxd2 & RX_DMA_VTAG)) - __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), - RX_DMA_VID(trxd.rxd3)); + if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) { + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + if (trxd.rxd3 & RX_DMA_VTAG_V2) + __vlan_hwaccel_put_tag(skb, + htons(RX_DMA_VPID(trxd.rxd4)), + RX_DMA_VID(trxd.rxd4)); + } else if (trxd.rxd2 & RX_DMA_VTAG) { + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), + RX_DMA_VID(trxd.rxd3)); + } + + /* If the device is attached to a dsa switch, the special + * tag inserted in VLAN field by hw switch can * be offloaded + * by RX HW VLAN offload. Clear vlan info. + */ + if (netdev_uses_dsa(netdev)) + __vlan_hwaccel_clear_tag(skb); + } + skb_record_rx_queue(skb, 0); napi_gro_receive(napi, skb); @@ -1432,7 +1513,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) rxd->rxd2 = RX_DMA_LSO; else - rxd->rxd2 = RX_DMA_PLEN0(ring->buf_size); + rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size); ring->calc_idx = idx; @@ -1634,7 +1715,8 @@ static int mtk_napi_rx(struct napi_struct *napi, int budget) do { int rx_done; - mtk_w32(eth, MTK_RX_DONE_INT, reg_map->pdma.irq_status); + mtk_w32(eth, eth->soc->txrx.rx_irq_done_mask, + reg_map->pdma.irq_status); rx_done = mtk_poll_rx(napi, budget - rx_done_total, eth); rx_done_total += rx_done; @@ -1648,10 +1730,11 @@ static int mtk_napi_rx(struct napi_struct *napi, int budget) if (rx_done_total == budget) return budget; - } while (mtk_r32(eth, reg_map->pdma.irq_status) & MTK_RX_DONE_INT); + } while (mtk_r32(eth, reg_map->pdma.irq_status) & + eth->soc->txrx.rx_irq_done_mask); if (napi_complete_done(napi, rx_done_total)) - mtk_rx_irq_enable(eth, MTK_RX_DONE_INT); + mtk_rx_irq_enable(eth, eth->soc->txrx.rx_irq_done_mask); return rx_done_total; } @@ -1661,7 +1744,7 @@ static int mtk_tx_alloc(struct mtk_eth *eth) const struct mtk_soc_data *soc = eth->soc; struct mtk_tx_ring *ring = ð->tx_ring; int i, sz = soc->txrx.txd_size; - struct mtk_tx_dma *txd; + struct mtk_tx_dma_v2 *txd; ring->buf = kcalloc(MTK_DMA_SIZE, sizeof(*ring->buf), GFP_KERNEL); @@ -1681,13 +1764,19 @@ static int mtk_tx_alloc(struct mtk_eth *eth) txd->txd2 = next_ptr; txd->txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU; txd->txd4 = 0; + if (MTK_HAS_CAPS(soc->caps, MTK_NETSYS_V2)) { + txd->txd5 = 0; + txd->txd6 = 0; + txd->txd7 = 0; + txd->txd8 = 0; + } } /* On MT7688 (PDMA only) this driver uses the ring->dma structs * only as the framework. 
The real HW descriptors are the PDMA * descriptors in ring->dma_pdma. */ - if (!MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { + if (!MTK_HAS_CAPS(soc->caps, MTK_QDMA)) { ring->dma_pdma = dma_alloc_coherent(eth->dma_dev, MTK_DMA_SIZE * sz, &ring->phys_pdma, GFP_KERNEL); if (!ring->dma_pdma) @@ -1767,13 +1856,11 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) struct mtk_rx_ring *ring; int rx_data_len, rx_dma_size; int i; - u32 offset = 0; if (rx_flag == MTK_RX_FLAGS_QDMA) { if (ring_no) return -EINVAL; ring = ð->rx_ring_qdma; - offset = 0x1000; } else { ring = ð->rx_ring[ring_no]; } @@ -1806,7 +1893,7 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) return -ENOMEM; for (i = 0; i < rx_dma_size; i++) { - struct mtk_rx_dma *rxd; + struct mtk_rx_dma_v2 *rxd; dma_addr_t dma_addr = dma_map_single(eth->dma_dev, ring->data[i] + NET_SKB_PAD + eth->ip_align, @@ -1821,26 +1908,47 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) rxd->rxd2 = RX_DMA_LSO; else - rxd->rxd2 = RX_DMA_PLEN0(ring->buf_size); + rxd->rxd2 = RX_DMA_PREP_PLEN0(ring->buf_size); rxd->rxd3 = 0; rxd->rxd4 = 0; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + rxd->rxd5 = 0; + rxd->rxd6 = 0; + rxd->rxd7 = 0; + rxd->rxd8 = 0; + } } ring->dma_size = rx_dma_size; ring->calc_idx_update = false; ring->calc_idx = rx_dma_size - 1; - ring->crx_idx_reg = reg_map->pdma.pcrx_ptr + ring_no * MTK_QRX_OFFSET; + if (rx_flag == MTK_RX_FLAGS_QDMA) + ring->crx_idx_reg = reg_map->qdma.qcrx_ptr + + ring_no * MTK_QRX_OFFSET; + else + ring->crx_idx_reg = reg_map->pdma.pcrx_ptr + + ring_no * MTK_QRX_OFFSET; /* make sure that all changes to the dma ring are flushed before we * continue */ wmb(); - mtk_w32(eth, ring->phys, - reg_map->pdma.rx_ptr + ring_no * MTK_QRX_OFFSET + offset); - mtk_w32(eth, rx_dma_size, - reg_map->pdma.rx_cnt_cfg + ring_no * MTK_QRX_OFFSET + offset); - mtk_w32(eth, ring->calc_idx, ring->crx_idx_reg + offset); - mtk_w32(eth, MTK_PST_DRX_IDX_CFG(ring_no), reg_map->pdma.rst_idx + offset); + if (rx_flag == MTK_RX_FLAGS_QDMA) { + mtk_w32(eth, ring->phys, + reg_map->qdma.rx_ptr + ring_no * MTK_QRX_OFFSET); + mtk_w32(eth, rx_dma_size, + reg_map->qdma.rx_cnt_cfg + ring_no * MTK_QRX_OFFSET); + mtk_w32(eth, MTK_PST_DRX_IDX_CFG(ring_no), + reg_map->qdma.rst_idx); + } else { + mtk_w32(eth, ring->phys, + reg_map->pdma.rx_ptr + ring_no * MTK_QRX_OFFSET); + mtk_w32(eth, rx_dma_size, + reg_map->pdma.rx_cnt_cfg + ring_no * MTK_QRX_OFFSET); + mtk_w32(eth, MTK_PST_DRX_IDX_CFG(ring_no), + reg_map->pdma.rst_idx); + } + mtk_w32(eth, ring->calc_idx, ring->crx_idx_reg); return 0; } @@ -2259,7 +2367,7 @@ static irqreturn_t mtk_handle_irq_rx(int irq, void *_eth) eth->rx_events++; if (likely(napi_schedule_prep(ð->rx_napi))) { __napi_schedule(ð->rx_napi); - mtk_rx_irq_disable(eth, MTK_RX_DONE_INT); + mtk_rx_irq_disable(eth, eth->soc->txrx.rx_irq_done_mask); } return IRQ_HANDLED; @@ -2283,8 +2391,10 @@ static irqreturn_t mtk_handle_irq(int irq, void *_eth) struct mtk_eth *eth = _eth; const struct mtk_reg_map *reg_map = eth->soc->reg_map; - if (mtk_r32(eth, reg_map->pdma.irq_mask) & MTK_RX_DONE_INT) { - if (mtk_r32(eth, reg_map->pdma.irq_status) & MTK_RX_DONE_INT) + if (mtk_r32(eth, reg_map->pdma.irq_mask) & + eth->soc->txrx.rx_irq_done_mask) { + if (mtk_r32(eth, reg_map->pdma.irq_status) & + eth->soc->txrx.rx_irq_done_mask) mtk_handle_irq_rx(irq, _eth); } if (mtk_r32(eth, reg_map->tx_irq_mask) & MTK_TX_DONE_INT) { @@ -2302,16 +2412,16 @@ 
static void mtk_poll_controller(struct net_device *dev) struct mtk_eth *eth = mac->hw; mtk_tx_irq_disable(eth, MTK_TX_DONE_INT); - mtk_rx_irq_disable(eth, MTK_RX_DONE_INT); + mtk_rx_irq_disable(eth, eth->soc->txrx.rx_irq_done_mask); mtk_handle_irq_rx(eth->irq[2], dev); mtk_tx_irq_enable(eth, MTK_TX_DONE_INT); - mtk_rx_irq_enable(eth, MTK_RX_DONE_INT); + mtk_rx_irq_enable(eth, eth->soc->txrx.rx_irq_done_mask); } #endif static int mtk_start_dma(struct mtk_eth *eth) { - u32 rx_2b_offset = (NET_IP_ALIGN == 2) ? MTK_RX_2B_OFFSET : 0; + u32 val, rx_2b_offset = (NET_IP_ALIGN == 2) ? MTK_RX_2B_OFFSET : 0; const struct mtk_reg_map *reg_map = eth->soc->reg_map; int err; @@ -2322,12 +2432,19 @@ static int mtk_start_dma(struct mtk_eth *eth) } if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) { - mtk_w32(eth, - MTK_TX_WB_DDONE | MTK_TX_DMA_EN | - MTK_TX_BT_32DWORDS | MTK_NDP_CO_PRO | - MTK_RX_DMA_EN | MTK_RX_2B_OFFSET | - MTK_RX_BT_32DWORDS, - reg_map->qdma.glo_cfg); + val = mtk_r32(eth, reg_map->qdma.glo_cfg); + val |= MTK_TX_DMA_EN | MTK_RX_DMA_EN | + MTK_TX_BT_32DWORDS | MTK_NDP_CO_PRO | + MTK_RX_2B_OFFSET | MTK_TX_WB_DDONE; + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) + val |= MTK_MUTLI_CNT | MTK_RESV_BUF | + MTK_WCOMP_EN | MTK_DMAD_WR_WDONE | + MTK_CHK_DDONE_EN; + else + val |= MTK_RX_BT_32DWORDS; + mtk_w32(eth, val, reg_map->qdma.glo_cfg); + mtk_w32(eth, MTK_RX_DMA_EN | rx_2b_offset | MTK_RX_BT_32DWORDS | MTK_MULTI_EN, @@ -2398,7 +2515,7 @@ static int mtk_open(struct net_device *dev) napi_enable(ð->tx_napi); napi_enable(ð->rx_napi); mtk_tx_irq_enable(eth, MTK_TX_DONE_INT); - mtk_rx_irq_enable(eth, MTK_RX_DONE_INT); + mtk_rx_irq_enable(eth, eth->soc->txrx.rx_irq_done_mask); refcount_set(ð->dma_refcnt, 1); } else @@ -2450,7 +2567,7 @@ static int mtk_stop(struct net_device *dev) mtk_gdm_config(eth, MTK_GDMA_DROP_ALL); mtk_tx_irq_disable(eth, MTK_TX_DONE_INT); - mtk_rx_irq_disable(eth, MTK_RX_DONE_INT); + mtk_rx_irq_disable(eth, eth->soc->txrx.rx_irq_done_mask); napi_disable(ð->tx_napi); napi_disable(ð->rx_napi); @@ -2610,9 +2727,25 @@ static int mtk_hw_init(struct mtk_eth *eth) return 0; } - /* Non-MT7628 handling... 
*/ - ethsys_reset(eth, RSTCTRL_FE); - ethsys_reset(eth, RSTCTRL_PPE); + val = RSTCTRL_FE | RSTCTRL_PPE; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, 0); + + val |= RSTCTRL_ETH; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_RSTCTRL_PPE1)) + val |= RSTCTRL_PPE1; + } + + ethsys_reset(eth, val); + + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + regmap_write(eth->ethsys, ETHSYS_FE_RST_CHK_IDLE_EN, + 0x3ffffff); + + /* Set FE to PDMAv2 if necessary */ + val = mtk_r32(eth, MTK_FE_GLO_MISC); + mtk_w32(eth, val | BIT(4), MTK_FE_GLO_MISC); + } if (eth->pctl) { /* Set GE2 driving and slew rate */ @@ -2651,11 +2784,47 @@ static int mtk_hw_init(struct mtk_eth *eth) /* FE int grouping */ mtk_w32(eth, MTK_TX_DONE_INT, reg_map->pdma.int_grp); - mtk_w32(eth, MTK_RX_DONE_INT, reg_map->pdma.int_grp + 4); + mtk_w32(eth, eth->soc->txrx.rx_irq_done_mask, reg_map->pdma.int_grp + 4); mtk_w32(eth, MTK_TX_DONE_INT, reg_map->qdma.int_grp); - mtk_w32(eth, MTK_RX_DONE_INT, reg_map->qdma.int_grp + 4); + mtk_w32(eth, eth->soc->txrx.rx_irq_done_mask, reg_map->qdma.int_grp + 4); mtk_w32(eth, 0x21021000, MTK_FE_INT_GRP); + if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { + /* PSE should not drop port8 and port9 packets */ + mtk_w32(eth, 0x00000300, PSE_DROP_CFG); + + /* PSE Free Queue Flow Control */ + mtk_w32(eth, 0x01fa01f4, PSE_FQFC_CFG2); + + /* PSE config input queue threshold */ + mtk_w32(eth, 0x001a000e, PSE_IQ_REV(1)); + mtk_w32(eth, 0x01ff001a, PSE_IQ_REV(2)); + mtk_w32(eth, 0x000e01ff, PSE_IQ_REV(3)); + mtk_w32(eth, 0x000e000e, PSE_IQ_REV(4)); + mtk_w32(eth, 0x000e000e, PSE_IQ_REV(5)); + mtk_w32(eth, 0x000e000e, PSE_IQ_REV(6)); + mtk_w32(eth, 0x000e000e, PSE_IQ_REV(7)); + mtk_w32(eth, 0x000e000e, PSE_IQ_REV(8)); + + /* PSE config output queue threshold */ + mtk_w32(eth, 0x000f000a, PSE_OQ_TH(1)); + mtk_w32(eth, 0x001a000f, PSE_OQ_TH(2)); + mtk_w32(eth, 0x000f001a, PSE_OQ_TH(3)); + mtk_w32(eth, 0x01ff000f, PSE_OQ_TH(4)); + mtk_w32(eth, 0x000f000f, PSE_OQ_TH(5)); + mtk_w32(eth, 0x0006000f, PSE_OQ_TH(6)); + mtk_w32(eth, 0x00060006, PSE_OQ_TH(7)); + mtk_w32(eth, 0x00060006, PSE_OQ_TH(8)); + + /* GDM and CDM Threshold */ + mtk_w32(eth, 0x00000004, MTK_GDM2_THRES); + mtk_w32(eth, 0x00000004, MTK_CDMW0_THRES); + mtk_w32(eth, 0x00000004, MTK_CDMW1_THRES); + mtk_w32(eth, 0x00000004, MTK_CDME0_THRES); + mtk_w32(eth, 0x00000004, MTK_CDME1_THRES); + mtk_w32(eth, 0x00000004, MTK_CDMM_THRES); + } + return 0; err_disable_pm: @@ -3216,12 +3385,8 @@ static int mtk_probe(struct platform_device *pdev) if (IS_ERR(eth->base)) return PTR_ERR(eth->base); - if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) { - eth->rx_dma_l4_valid = RX_DMA_L4_VALID_PDMA; + if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) eth->ip_align = NET_IP_ALIGN; - } else { - eth->rx_dma_l4_valid = RX_DMA_L4_VALID; - } spin_lock_init(ð->page_lock); spin_lock_init(ð->tx_irq_lock); @@ -3457,6 +3622,10 @@ static const struct mtk_soc_data mt2701_data = { .txrx = { .txd_size = sizeof(struct mtk_tx_dma), .rxd_size = sizeof(struct mtk_rx_dma), + .rx_irq_done_mask = MTK_RX_DONE_INT, + .rx_dma_l4_valid = RX_DMA_L4_VALID, + .dma_max_len = MTK_TX_DMA_BUF_LEN, + .dma_len_offset = 16, }, }; @@ -3470,6 +3639,10 @@ static const struct mtk_soc_data mt7621_data = { .txrx = { .txd_size = sizeof(struct mtk_tx_dma), .rxd_size = sizeof(struct mtk_rx_dma), + .rx_irq_done_mask = MTK_RX_DONE_INT, + .rx_dma_l4_valid = RX_DMA_L4_VALID, + .dma_max_len = MTK_TX_DMA_BUF_LEN, + .dma_len_offset = 16, }, }; @@ -3484,6 +3657,10 @@ 
static const struct mtk_soc_data mt7622_data = { .txrx = { .txd_size = sizeof(struct mtk_tx_dma), .rxd_size = sizeof(struct mtk_rx_dma), + .rx_irq_done_mask = MTK_RX_DONE_INT, + .rx_dma_l4_valid = RX_DMA_L4_VALID, + .dma_max_len = MTK_TX_DMA_BUF_LEN, + .dma_len_offset = 16, }, }; @@ -3497,6 +3674,10 @@ static const struct mtk_soc_data mt7623_data = { .txrx = { .txd_size = sizeof(struct mtk_tx_dma), .rxd_size = sizeof(struct mtk_rx_dma), + .rx_irq_done_mask = MTK_RX_DONE_INT, + .rx_dma_l4_valid = RX_DMA_L4_VALID, + .dma_max_len = MTK_TX_DMA_BUF_LEN, + .dma_len_offset = 16, }, }; @@ -3510,6 +3691,10 @@ static const struct mtk_soc_data mt7629_data = { .txrx = { .txd_size = sizeof(struct mtk_tx_dma), .rxd_size = sizeof(struct mtk_rx_dma), + .rx_irq_done_mask = MTK_RX_DONE_INT, + .rx_dma_l4_valid = RX_DMA_L4_VALID, + .dma_max_len = MTK_TX_DMA_BUF_LEN, + .dma_len_offset = 16, }, }; @@ -3522,6 +3707,10 @@ static const struct mtk_soc_data rt5350_data = { .txrx = { .txd_size = sizeof(struct mtk_tx_dma), .rxd_size = sizeof(struct mtk_rx_dma), + .rx_irq_done_mask = MTK_RX_DONE_INT, + .rx_dma_l4_valid = RX_DMA_L4_VALID_PDMA, + .dma_max_len = MTK_TX_DMA_BUF_LEN, + .dma_len_offset = 16, }, }; diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 7e29f042357f..1db5424edee5 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -24,6 +24,7 @@ #define MTK_MAX_RX_LENGTH 1536 #define MTK_MAX_RX_LENGTH_2K 2048 #define MTK_TX_DMA_BUF_LEN 0x3fff +#define MTK_TX_DMA_BUF_LEN_V2 0xffff #define MTK_DMA_SIZE 512 #define MTK_MAC_COUNT 2 #define MTK_RX_ETH_HLEN (ETH_HLEN + ETH_FCS_LEN) @@ -83,6 +84,10 @@ #define MTK_CDMQ_IG_CTRL 0x1400 #define MTK_CDMQ_STAG_EN BIT(0) +/* CDMP Ingress Control Register */ +#define MTK_CDMP_IG_CTRL 0x400 +#define MTK_CDMP_STAG_EN BIT(0) + /* CDMP Exgress Control Register */ #define MTK_CDMP_EG_CTRL 0x404 @@ -102,13 +107,38 @@ /* Unicast Filter MAC Address Register - High */ #define MTK_GDMA_MAC_ADRH(x) (0x50C + (x * 0x1000)) +/* FE global misc reg*/ +#define MTK_FE_GLO_MISC 0x124 + +/* PSE Free Queue Flow Control */ +#define PSE_FQFC_CFG1 0x100 +#define PSE_FQFC_CFG2 0x104 +#define PSE_DROP_CFG 0x108 + +/* PSE Input Queue Reservation Register*/ +#define PSE_IQ_REV(x) (0x140 + (((x) - 1) << 2)) + +/* PSE Output Queue Threshold Register*/ +#define PSE_OQ_TH(x) (0x160 + (((x) - 1) << 2)) + +/* GDM and CDM Threshold */ +#define MTK_GDM2_THRES 0x1530 +#define MTK_CDMW0_THRES 0x164c +#define MTK_CDMW1_THRES 0x1650 +#define MTK_CDME0_THRES 0x1654 +#define MTK_CDME1_THRES 0x1658 +#define MTK_CDMM_THRES 0x165c + /* PDMA HW LRO Control Registers */ #define MTK_PDMA_LRO_CTRL_DW0 0x980 #define MTK_LRO_EN BIT(0) #define MTK_L3_CKS_UPD_EN BIT(7) +#define MTK_L3_CKS_UPD_EN_V2 BIT(19) #define MTK_LRO_ALT_PKT_CNT_MODE BIT(21) #define MTK_LRO_RING_RELINQUISH_REQ (0x7 << 26) +#define MTK_LRO_RING_RELINQUISH_REQ_V2 (0xf << 24) #define MTK_LRO_RING_RELINQUISH_DONE (0x7 << 29) +#define MTK_LRO_RING_RELINQUISH_DONE_V2 (0xf << 28) #define MTK_PDMA_LRO_CTRL_DW1 0x984 #define MTK_PDMA_LRO_CTRL_DW2 0x988 @@ -180,6 +210,13 @@ #define MTK_TX_DMA_EN BIT(0) #define MTK_DMA_BUSY_TIMEOUT_US 1000000 +/* QDMA V2 Global Configuration Register */ +#define MTK_CHK_DDONE_EN BIT(28) +#define MTK_DMAD_WR_WDONE BIT(26) +#define MTK_WCOMP_EN BIT(24) +#define MTK_RESV_BUF (0x40 << 16) +#define MTK_MUTLI_CNT (0x4 << 12) + /* QDMA Flow Control Register */ #define FC_THRES_DROP_MODE BIT(20) #define FC_THRES_DROP_EN (7 << 16) 
@@ -199,11 +236,32 @@ #define MTK_RX_DONE_INT MTK_RX_DONE_DLY #define MTK_TX_DONE_INT MTK_TX_DONE_DLY +#define MTK_RX_DONE_INT_V2 BIT(14) + /* QDMA Interrupt grouping registers */ #define MTK_RLS_DONE_INT BIT(0) #define MTK_STAT_OFFSET 0x40 +/* QDMA TX NUM */ +#define MTK_QDMA_TX_NUM 16 +#define MTK_QDMA_TX_MASK (MTK_QDMA_TX_NUM - 1) +#define QID_BITS_V2(x) (((x) & 0x3f) << 16) +#define MTK_QDMA_GMAC2_QID 8 + +#define MTK_TX_DMA_BUF_SHIFT 8 + +/* QDMA V2 descriptor txd6 */ +#define TX_DMA_INS_VLAN_V2 BIT(16) +/* QDMA V2 descriptor txd5 */ +#define TX_DMA_CHKSUM_V2 (0x7 << 28) +#define TX_DMA_TSO_V2 BIT(31) + +/* QDMA V2 descriptor txd4 */ +#define TX_DMA_FPORT_SHIFT_V2 8 +#define TX_DMA_FPORT_MASK_V2 0xf +#define TX_DMA_SWC_V2 BIT(30) + #define MTK_WDMA0_BASE 0x2800 #define MTK_WDMA1_BASE 0x2c00 @@ -217,10 +275,9 @@ /* QDMA descriptor txd3 */ #define TX_DMA_OWNER_CPU BIT(31) #define TX_DMA_LS0 BIT(30) -#define TX_DMA_PLEN0(_x) (((_x) & MTK_TX_DMA_BUF_LEN) << 16) -#define TX_DMA_PLEN1(_x) ((_x) & MTK_TX_DMA_BUF_LEN) +#define TX_DMA_PLEN0(x) (((x) & eth->soc->txrx.dma_max_len) << eth->soc->txrx.dma_len_offset) +#define TX_DMA_PLEN1(x) ((x) & eth->soc->txrx.dma_max_len) #define TX_DMA_SWC BIT(14) -#define TX_DMA_SDL(_x) (((_x) & 0x3fff) << 16) /* PDMA on MT7628 */ #define TX_DMA_DONE BIT(31) @@ -230,12 +287,14 @@ /* QDMA descriptor rxd2 */ #define RX_DMA_DONE BIT(31) #define RX_DMA_LSO BIT(30) -#define RX_DMA_PLEN0(_x) (((_x) & 0x3fff) << 16) -#define RX_DMA_GET_PLEN0(_x) (((_x) >> 16) & 0x3fff) +#define RX_DMA_PREP_PLEN0(x) (((x) & eth->soc->txrx.dma_max_len) << eth->soc->txrx.dma_len_offset) +#define RX_DMA_GET_PLEN0(x) (((x) >> eth->soc->txrx.dma_len_offset) & eth->soc->txrx.dma_max_len) #define RX_DMA_VTAG BIT(15) /* QDMA descriptor rxd3 */ -#define RX_DMA_VID(_x) ((_x) & 0xfff) +#define RX_DMA_VID(x) ((x) & VLAN_VID_MASK) +#define RX_DMA_TCI(x) ((x) & (VLAN_PRIO_MASK | VLAN_VID_MASK)) +#define RX_DMA_VPID(x) (((x) >> 16) & 0xffff) /* QDMA descriptor rxd4 */ #define MTK_RXD4_FOE_ENTRY GENMASK(13, 0) @@ -246,10 +305,15 @@ /* QDMA descriptor rxd4 */ #define RX_DMA_L4_VALID BIT(24) #define RX_DMA_L4_VALID_PDMA BIT(30) /* when PDMA is used */ -#define RX_DMA_FPORT_SHIFT 19 -#define RX_DMA_FPORT_MASK 0x7 #define RX_DMA_SPECIAL_TAG BIT(22) +#define RX_DMA_GET_SPORT(x) (((x) >> 19) & 0xf) +#define RX_DMA_GET_SPORT_V2(x) (((x) >> 26) & 0x7) + +/* PDMA V2 descriptor rxd3 */ +#define RX_DMA_VTAG_V2 BIT(0) +#define RX_DMA_L4_VALID_V2 BIT(2) + /* PHY Indirect Access Control registers */ #define MTK_PHY_IAC 0x10004 #define PHY_IAC_ACCESS BIT(31) @@ -372,6 +436,16 @@ #define ETHSYS_TRGMII_MT7621_APLL BIT(6) #define ETHSYS_TRGMII_MT7621_DDR_PLL BIT(5) +/* ethernet reset control register */ +#define ETHSYS_RSTCTRL 0x34 +#define RSTCTRL_FE BIT(6) +#define RSTCTRL_PPE BIT(31) +#define RSTCTRL_PPE1 BIT(30) +#define RSTCTRL_ETH BIT(23) + +/* ethernet reset check idle register */ +#define ETHSYS_FE_RST_CHK_IDLE_EN 0x28 + /* ethernet reset control register */ #define ETHSYS_RSTCTRL 0x34 #define RSTCTRL_FE BIT(6) @@ -457,6 +531,17 @@ struct mtk_rx_dma { unsigned int rxd4; } __packed __aligned(4); +struct mtk_rx_dma_v2 { + unsigned int rxd1; + unsigned int rxd2; + unsigned int rxd3; + unsigned int rxd4; + unsigned int rxd5; + unsigned int rxd6; + unsigned int rxd7; + unsigned int rxd8; +} __packed __aligned(4); + struct mtk_tx_dma { unsigned int txd1; unsigned int txd2; @@ -464,6 +549,17 @@ struct mtk_tx_dma { unsigned int txd4; } __packed __aligned(4); +struct mtk_tx_dma_v2 { + unsigned int txd1; + unsigned int 
txd2; + unsigned int txd3; + unsigned int txd4; + unsigned int txd5; + unsigned int txd6; + unsigned int txd7; + unsigned int txd8; +} __packed __aligned(4); + struct mtk_eth; struct mtk_mac; @@ -650,7 +746,9 @@ enum mkt_eth_capabilities { MTK_SHARED_INT_BIT, MTK_TRGMII_MT7621_CLK_BIT, MTK_QDMA_BIT, + MTK_NETSYS_V2_BIT, MTK_SOC_MT7628_BIT, + MTK_RSTCTRL_PPE1_BIT, /* MUX BITS*/ MTK_ETH_MUX_GDM1_TO_GMAC1_ESW_BIT, @@ -682,7 +780,9 @@ enum mkt_eth_capabilities { #define MTK_SHARED_INT BIT(MTK_SHARED_INT_BIT) #define MTK_TRGMII_MT7621_CLK BIT(MTK_TRGMII_MT7621_CLK_BIT) #define MTK_QDMA BIT(MTK_QDMA_BIT) +#define MTK_NETSYS_V2 BIT(MTK_NETSYS_V2_BIT) #define MTK_SOC_MT7628 BIT(MTK_SOC_MT7628_BIT) +#define MTK_RSTCTRL_PPE1 BIT(MTK_RSTCTRL_PPE1_BIT) #define MTK_ETH_MUX_GDM1_TO_GMAC1_ESW \ BIT(MTK_ETH_MUX_GDM1_TO_GMAC1_ESW_BIT) @@ -759,6 +859,7 @@ struct mtk_tx_dma_desc_info { dma_addr_t addr; u32 size; u16 vlan_tci; + u16 qid; u8 gso:1; u8 csum:1; u8 vlan:1; @@ -816,6 +917,10 @@ struct mtk_reg_map { * the extra setup for those pins used by GMAC. * @txd_size Tx DMA descriptor size. * @rxd_size Rx DMA descriptor size. + * @rx_irq_done_mask Rx irq done register mask. + * @rx_dma_l4_valid Rx DMA valid register mask. + * @dma_max_len Max DMA tx/rx buffer length. + * @dma_len_offset Tx/Rx DMA length field offset. */ struct mtk_soc_data { const struct mtk_reg_map *reg_map; @@ -828,6 +933,10 @@ struct mtk_soc_data { struct { u32 txd_size; u32 rxd_size; + u32 rx_irq_done_mask; + u32 rx_dma_l4_valid; + u32 dma_max_len; + u32 dma_len_offset; } txrx; }; @@ -947,7 +1056,6 @@ struct mtk_eth { u32 tx_bytes; struct dim tx_dim; - u32 rx_dma_l4_valid; int ip_align; struct mtk_ppe *ppe; From patchwork Fri May 20 18:11:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 574663 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BCB6C433FE for ; Fri, 20 May 2022 18:13:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352472AbiETSNK (ORCPT ); Fri, 20 May 2022 14:13:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36546 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349265AbiETSNI (ORCPT ); Fri, 20 May 2022 14:13:08 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0BE8F18DADE; Fri, 20 May 2022 11:13:07 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 9D9EDB82D4B; Fri, 20 May 2022 18:13:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7F6ADC36AE3; Fri, 20 May 2022 18:13:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070384; bh=aES4gNLnyZ4BVRsMp9P1u7euxX04IycxzdyraeSQOSM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ghcXuKdWzdMMwZ0RtPCabBu2mE5h0JSpgrLRDqSmTjC3QxNeMypBeXBvDE2Kt7VpO dDFKxe7m2tNuBvH+V9T9mftL/fi2LO4A6Jzn0LkqVXrJFAeqsM50iJJyu0sk0zfxCU s51fkZh6cU6vlIKAd8YPZa8UYk9R8vo432iLJx6eRECAYEq3a85N6ds1fsqe+ne8zN d8ym4YkEuYtttZAxHX67Z2I9UkH6GjB4Q6d2QknuFLg1tQVQYey+uvYuVbrn9uZH1Z 
b0kALBPyE58XVnoKL8DWnig5qrdZHgHEOJdjpTjDkZtle2nf7WainwoyLh3t3pypfh InnaffFJk6f8g== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 14/16] net: ethernet: mtk_eth_soc: convert ring dma pointer to void Date: Fri, 20 May 2022 20:11:37 +0200 Message-Id: <612d9c41954ab4ad719e4cffb01b2d411a63c592.1653069056.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Simplify the code converting {tx,rx} ring dma pointer to void Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 32 +++++++++------------ drivers/net/ethernet/mediatek/mtk_eth_soc.h | 4 +-- 2 files changed, 16 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 9169f6360ab5..64c201e763c3 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -917,18 +917,15 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) return 0; } -static inline void *mtk_qdma_phys_to_virt(struct mtk_tx_ring *ring, u32 desc) +static void *mtk_qdma_phys_to_virt(struct mtk_tx_ring *ring, u32 desc) { - void *ret = ring->dma; - - return ret + (desc - ring->phys); + return ring->dma + (desc - ring->phys); } static struct mtk_tx_buf *mtk_desc_to_tx_buf(struct mtk_tx_ring *ring, - struct mtk_tx_dma *txd, - u32 txd_size) + void *txd, u32 txd_size) { - int idx = ((void *)txd - (void *)ring->dma) / txd_size; + int idx = (txd - ring->dma) / txd_size; return &ring->buf[idx]; } @@ -936,13 +933,12 @@ static struct mtk_tx_buf *mtk_desc_to_tx_buf(struct mtk_tx_ring *ring, static struct mtk_tx_dma *qdma_to_pdma(struct mtk_tx_ring *ring, struct mtk_tx_dma *dma) { - return ring->dma_pdma - ring->dma + dma; + return ring->dma_pdma - (struct mtk_tx_dma *)ring->dma + dma; } -static int txd_to_idx(struct mtk_tx_ring *ring, struct mtk_tx_dma *dma, - u32 txd_size) +static int txd_to_idx(struct mtk_tx_ring *ring, void *dma, u32 txd_size) { - return ((void *)dma - (void *)ring->dma) / txd_size; + return (dma - ring->dma) / txd_size; } static void mtk_tx_unmap(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf, @@ -1359,7 +1355,7 @@ static struct mtk_rx_ring *mtk_get_rx_ring(struct mtk_eth *eth) ring = ð->rx_ring[i]; idx = NEXT_DESP_IDX(ring->calc_idx, ring->dma_size); - rxd = (void *)ring->dma + idx * eth->soc->txrx.rxd_size; + rxd = ring->dma + idx * eth->soc->txrx.rxd_size; if (rxd->rxd2 & RX_DMA_DONE) { ring->calc_idx_update = true; return ring; @@ -1411,7 +1407,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, goto rx_done; idx = NEXT_DESP_IDX(ring->calc_idx, ring->dma_size); - rxd = (void *)ring->dma + idx * eth->soc->txrx.rxd_size; + rxd = ring->dma + idx * eth->soc->txrx.rxd_size; data = ring->data[idx]; if (!mtk_rx_get_desc(eth, &trxd, rxd)) @@ -1615,7 +1611,7 @@ static int mtk_poll_tx_pdma(struct mtk_eth *eth, int budget, mtk_tx_unmap(eth, tx_buf, true); - desc = (void *)ring->dma + cpu * eth->soc->txrx.txd_size; + desc = ring->dma + cpu * eth->soc->txrx.txd_size; ring->last_free = desc; atomic_inc(&ring->free_count); @@ -1760,7 +1756,7 @@ static int mtk_tx_alloc(struct 
mtk_eth *eth) int next = (i + 1) % MTK_DMA_SIZE; u32 next_ptr = ring->phys + next * sz; - txd = (void *)ring->dma + i * sz; + txd = ring->dma + i * sz; txd->txd2 = next_ptr; txd->txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU; txd->txd4 = 0; @@ -1790,7 +1786,7 @@ static int mtk_tx_alloc(struct mtk_eth *eth) ring->dma_size = MTK_DMA_SIZE; atomic_set(&ring->free_count, MTK_DMA_SIZE - 2); - ring->next_free = &ring->dma[0]; + ring->next_free = ring->dma; ring->last_free = (void *)txd; ring->last_free_ptr = (u32)(ring->phys + ((MTK_DMA_SIZE - 1) * sz)); ring->thresh = MAX_SKB_FRAGS; @@ -1902,7 +1898,7 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) if (unlikely(dma_mapping_error(eth->dma_dev, dma_addr))) return -ENOMEM; - rxd = (void *)ring->dma + i * eth->soc->txrx.rxd_size; + rxd = ring->dma + i * eth->soc->txrx.rxd_size; rxd->rxd1 = (unsigned int)dma_addr; if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) @@ -1964,7 +1960,7 @@ static void mtk_rx_clean(struct mtk_eth *eth, struct mtk_rx_ring *ring) if (!ring->data[i]) continue; - rxd = (void *)ring->dma + i * eth->soc->txrx.rxd_size; + rxd = ring->dma + i * eth->soc->txrx.rxd_size; if (!rxd->rxd1) continue; diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 1db5424edee5..f53024682698 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -692,7 +692,7 @@ struct mtk_tx_buf { * are present */ struct mtk_tx_ring { - struct mtk_tx_dma *dma; + void *dma; struct mtk_tx_buf *buf; dma_addr_t phys; struct mtk_tx_dma *next_free; @@ -722,7 +722,7 @@ enum mtk_rx_flags { * @calc_idx: The current head of ring */ struct mtk_rx_ring { - struct mtk_rx_dma *dma; + void *dma; u8 **data; dma_addr_t phys; u16 frag_size; From patchwork Fri May 20 18:11:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 575140 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17C97C4332F for ; Fri, 20 May 2022 18:13:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352453AbiETSNO (ORCPT ); Fri, 20 May 2022 14:13:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36618 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352476AbiETSNL (ORCPT ); Fri, 20 May 2022 14:13:11 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 549A818DAC9; Fri, 20 May 2022 11:13:10 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id F0919B82A55; Fri, 20 May 2022 18:13:08 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CD201C34100; Fri, 20 May 2022 18:13:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070387; bh=517w0061VhT0H/2QvPkC4TTyfnG9lp0U3TLqta29KZ8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mU4DXDIFZk/KjH0I05ydJghpxRix8vZ/C/1zWjsaIQz9EJsrKCrr2oBTG3R+A6VOl FF20FKhIU057fv4aou8hkX95C7Cu+TISGrv6EqffpI/WpLppIgef/L+qA4HpeYRXyQ 
XsCnyinlT8Fxf5C2jWpl6MKsBH8TOfrZyothyVh0yvt8gUGoApNGatO9MejkXzbAvt 04HHSHcQwh4BGVBoBF5+75ihBo9TuGTXVSlR5/wU5qkaomoTS77JBQgu3Mxim9Wd/X Iv5SYAxwPRri8Ek8oyLiGieJP21R2cbEHrb8iYqkiAI+CuZCE/eqj2O58tuv4cn7vG IiC0IfYrl9tAw== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 15/16] net: ethernet: mtk_eth_soc: convert scratch_ring pointer to void Date: Fri, 20 May 2022 20:11:38 +0200 Message-Id: <830515be544d4106a52102c15da6e7df3d1e04ab.1653069056.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Simplify the code converting scratch_ring pointer to void Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 2 +- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 64c201e763c3..c034fd90dbdc 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -893,7 +893,7 @@ static int mtk_init_fq_dma(struct mtk_eth *eth) for (i = 0; i < cnt; i++) { struct mtk_tx_dma_v2 *txd; - txd = (void *)eth->scratch_ring + i * soc->txrx.txd_size; + txd = eth->scratch_ring + i * soc->txrx.txd_size; txd->txd1 = dma_addr + i * MTK_QDMA_PAGE_SIZE; if (i < cnt - 1) txd->txd2 = eth->phy_scratch_ring + diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index f53024682698..67482124de2a 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -1033,7 +1033,7 @@ struct mtk_eth { struct mtk_rx_ring rx_ring_qdma; struct napi_struct tx_napi; struct napi_struct rx_napi; - struct mtk_tx_dma *scratch_ring; + void *scratch_ring; dma_addr_t phy_scratch_ring; void *scratch_head; struct clk *clks[MTK_CLK_MAX]; From patchwork Fri May 20 18:11:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lorenzo Bianconi X-Patchwork-Id: 575139 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1EEEFC433F5 for ; Fri, 20 May 2022 18:13:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348377AbiETSNP (ORCPT ); Fri, 20 May 2022 14:13:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36652 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352478AbiETSNN (ORCPT ); Fri, 20 May 2022 14:13:13 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F02DB18DAC9; Fri, 20 May 2022 11:13:11 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 8BE7A60FF1; Fri, 20 May 2022 18:13:11 +0000 (UTC) 
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 237DAC385A9; Fri, 20 May 2022 18:13:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1653070391; bh=E8tljpchCGmJskUxuHASB8oeOunLLFM4mfYDUlKkfUQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fSVJtNXtKKTRpmFiZQ5fxnbktiLDrKGXjQOW/ju9nFLwyVuAVC+pK0v592yrnoDWd xtHF71AutTyBcQKLbpN6vcUB+YAeB9oj5ws7L6fCFjaFPXrE601tRidDyCnBXJBXof xAHPorJ0vslCWMHMjIXr23bN0nU6oHM9HbCvA/TulhinuMXaabHZ00xoQ4/HZsiMhX 1CckL4ah/bUv3KLKB76pcuAmrewuIyXrhtQ4UrgIYhho9AutoZ7DMBxCbyX/TgGHSY tV5ny4tFvsuM49/PsYqboAfCgKD/Em8I7zypSz9BdP+Mds8/dQVyNelYZYBtkfJPr2 A1WyoVWKAUXaw== From: Lorenzo Bianconi To: netdev@vger.kernel.org Cc: nbd@nbd.name, john@phrozen.org, sean.wang@mediatek.com, Mark-MC.Lee@mediatek.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Sam.Shih@mediatek.com, linux-mediatek@lists.infradead.org, devicetree@vger.kernel.org, robh@kernel.org, lorenzo.bianconi@redhat.com Subject: [PATCH v3 net-next 16/16] net: ethernet: mtk_eth_soc: introduce support for mt7986 chipset Date: Fri, 20 May 2022 20:11:39 +0200 Message-Id: <7bafe3ef8140b8743add6350ee2f8faefc858f1c.1653069056.git.lorenzo@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org Add support for mt7986-eth driver available on mt7986 soc. Tested-by: Sam Shih Signed-off-by: Lorenzo Bianconi --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 55 ++++++++++++++++++++- drivers/net/ethernet/mediatek/mtk_eth_soc.h | 18 +++++++ 2 files changed, 72 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index c034fd90dbdc..a9d4fd8945bb 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -87,6 +87,43 @@ static const struct mtk_reg_map mt7628_reg_map = { }, }; +static const struct mtk_reg_map mt7986_reg_map = { + .tx_irq_mask = 0x461c, + .tx_irq_status = 0x4618, + .pdma = { + .rx_ptr = 0x6100, + .rx_cnt_cfg = 0x6104, + .pcrx_ptr = 0x6108, + .glo_cfg = 0x6204, + .rst_idx = 0x6208, + .delay_irq = 0x620c, + .irq_status = 0x6220, + .irq_mask = 0x6228, + .int_grp = 0x6250, + }, + .qdma = { + .qtx_cfg = 0x4400, + .rx_ptr = 0x4500, + .rx_cnt_cfg = 0x4504, + .qcrx_ptr = 0x4508, + .glo_cfg = 0x4604, + .rst_idx = 0x4608, + .delay_irq = 0x460c, + .fc_th = 0x4610, + .int_grp = 0x4620, + .hred = 0x4644, + .ctx_ptr = 0x4700, + .dtx_ptr = 0x4704, + .crx_ptr = 0x4710, + .drx_ptr = 0x4714, + .fq_head = 0x4720, + .fq_tail = 0x4724, + .fq_count = 0x4728, + .fq_blen = 0x472c, + }, + .gdm1_cnt = 0x1c00, +}; + /* strings used by ethtool */ static const struct mtk_ethtool_stats { char str[ETH_GSTRING_LEN]; @@ -110,7 +147,7 @@ static const char * const mtk_clks_source_name[] = { "ethif", "sgmiitop", "esw", "gp0", "gp1", "gp2", "fe", "trgpll", "sgmii_tx250m", "sgmii_rx250m", "sgmii_cdr_ref", "sgmii_cdr_fb", "sgmii2_tx250m", "sgmii2_rx250m", "sgmii2_cdr_ref", "sgmii2_cdr_fb", - "sgmii_ck", "eth2pll", + "sgmii_ck", "eth2pll", "wocpu0", "wocpu1", "netsys0", "netsys1" }; void mtk_w32(struct mtk_eth *eth, u32 val, unsigned reg) @@ -3694,6 +3731,21 @@ static const struct mtk_soc_data mt7629_data = { }, }; +static const struct mtk_soc_data mt7986_data = { + .reg_map = &mt7986_reg_map, + .ana_rgc3 = 0x128, + .caps = MT7986_CAPS, + .required_clks = MT7986_CLKS_BITMAP, + .required_pctl = false, + .txrx = { + .txd_size = sizeof(struct 
mtk_tx_dma_v2), + .rxd_size = sizeof(struct mtk_rx_dma_v2), + .rx_irq_done_mask = MTK_RX_DONE_INT_V2, + .dma_max_len = MTK_TX_DMA_BUF_LEN_V2, + .dma_len_offset = 8, + }, +}; + static const struct mtk_soc_data rt5350_data = { .reg_map = &mt7628_reg_map, .caps = MT7628_CAPS, @@ -3716,6 +3768,7 @@ const struct of_device_id of_mtk_match[] = { { .compatible = "mediatek,mt7622-eth", .data = &mt7622_data}, { .compatible = "mediatek,mt7623-eth", .data = &mt7623_data}, { .compatible = "mediatek,mt7629-eth", .data = &mt7629_data}, + { .compatible = "mediatek,mt7986-eth", .data = &mt7986_data}, { .compatible = "ralink,rt5350-eth", .data = &rt5350_data}, {}, }; diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h index 67482124de2a..0a632896451a 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h @@ -627,6 +627,10 @@ enum mtk_clks_map { MTK_CLK_SGMII2_CDR_FB, MTK_CLK_SGMII_CK, MTK_CLK_ETH2PLL, + MTK_CLK_WOCPU0, + MTK_CLK_WOCPU1, + MTK_CLK_NETSYS0, + MTK_CLK_NETSYS1, MTK_CLK_MAX }; @@ -657,6 +661,16 @@ enum mtk_clks_map { BIT(MTK_CLK_SGMII2_CDR_FB) | \ BIT(MTK_CLK_SGMII_CK) | \ BIT(MTK_CLK_ETH2PLL) | BIT(MTK_CLK_SGMIITOP)) +#define MT7986_CLKS_BITMAP (BIT(MTK_CLK_FE) | BIT(MTK_CLK_GP2) | BIT(MTK_CLK_GP1) | \ + BIT(MTK_CLK_WOCPU1) | BIT(MTK_CLK_WOCPU0) | \ + BIT(MTK_CLK_SGMII_TX_250M) | \ + BIT(MTK_CLK_SGMII_RX_250M) | \ + BIT(MTK_CLK_SGMII_CDR_REF) | \ + BIT(MTK_CLK_SGMII_CDR_FB) | \ + BIT(MTK_CLK_SGMII2_TX_250M) | \ + BIT(MTK_CLK_SGMII2_RX_250M) | \ + BIT(MTK_CLK_SGMII2_CDR_REF) | \ + BIT(MTK_CLK_SGMII2_CDR_FB)) enum mtk_dev_state { MTK_HW_INIT, @@ -855,6 +869,10 @@ enum mkt_eth_capabilities { MTK_MUX_U3_GMAC2_TO_QPHY | \ MTK_MUX_GMAC12_TO_GEPHY_SGMII | MTK_QDMA) +#define MT7986_CAPS (MTK_GMAC1_SGMII | MTK_GMAC2_SGMII | \ + MTK_MUX_GMAC12_TO_GEPHY_SGMII | MTK_QDMA | \ + MTK_NETSYS_V2 | MTK_RSTCTRL_PPE1) + struct mtk_tx_dma_desc_info { dma_addr_t addr; u32 size;