From patchwork Tue May 13 09:58:21 2025
X-Patchwork-Submitter: Luo Jie
X-Patchwork-Id: 889670
From: Luo Jie
Date: Tue, 13 May 2025 17:58:21 +0800
Subject: [PATCH net-next v4 01/14] dt-bindings: net: Add PPE for Qualcomm IPQ9574 SoC
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-1-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lei Wei, Suruchi Agarwal, Pavithra R, Simon Horman, Jonathan Corbet, Kees Cook, "Gustavo A. R. Silva", Philipp Zabel
CC: Luo Jie

The PPE (packet process engine) hardware block is available in Qualcomm IPQ chipsets that support the PPE architecture, such as the IPQ9574. The PPE in the IPQ9574 SoC includes six Ethernet ports (6 GMACs and 6 XGMACs), which connect to external PHY devices through the PCS. It includes an L2 switch function for bridging packets among the six Ethernet ports and the CPU port. The CPU port enables packet transfer between the Ethernet ports and the ARM cores in the SoC, using the Ethernet DMA.

The PPE also includes packet processing offload capabilities for various networking functions such as routed and bridged flows, VLANs, various tunnel protocols and VPN.
Signed-off-by: Luo Jie
---
 .../devicetree/bindings/net/qcom,ipq9574-ppe.yaml | 406 +++++++++++++++++++++
 1 file changed, 406 insertions(+)

diff --git a/Documentation/devicetree/bindings/net/qcom,ipq9574-ppe.yaml b/Documentation/devicetree/bindings/net/qcom,ipq9574-ppe.yaml
new file mode 100644
index 000000000000..f36f4d180674
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/qcom,ipq9574-ppe.yaml
@@ -0,0 +1,406 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/qcom,ipq9574-ppe.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IPQ packet process engine (PPE)
+
+maintainers:
+  - Luo Jie
+  - Lei Wei
+  - Suruchi Agarwal
+  - Pavithra R
+
+description:
+  The Ethernet functionality in the PPE (Packet Process Engine) is composed
+  of three components: the switch core, the port wrapper and the Ethernet DMA.
+
+  The switch core in the IPQ9574 PPE has a maximum of 6 front panel ports and
+  two FIFO interfaces. One of the two FIFO interfaces is used for Ethernet
+  port to host CPU communication using the Ethernet DMA. The other is used for
+  communicating with the EIP engine, which is used for IPsec offload. On the
+  IPQ9574, the PPE includes 6 GMAC/XGMACs that can be connected with external
+  Ethernet PHYs. The switch core also includes BM (Buffer Management), QM
+  (Queue Management) and SCH (Scheduler) modules to support packet processing.
+
+  The port wrapper provides connections from the 6 GMAC/XGMACs to the UNIPHY
+  (PCS), supporting various modes such as SGMII/QSGMII/PSGMII/USXGMII/10G-BASER.
+  There are 3 UNIPHY (PCS) instances supported on the IPQ9574.
+
+  The Ethernet DMA is used to transmit and receive packets between the six
+  Ethernet ports and the ARM host CPU.
+
+  The following diagram shows the PPE hardware block along with its
+  connectivity to the external hardware blocks such as the clock hardware
+  blocks (CMNPLL, GCC, NSS clock controller) and Ethernet PCS/PHY blocks. For
+  depicting the PHY connectivity, one 4x1 Gbps PHY (QCA8075) and two 10 Gbps
+  PHYs are used as an example.
+ - | + +---------+ + | 48 MHZ | + +----+----+ + |(clock) + v + +----+----+ + +------| CMN PLL | + | +----+----+ + | |(clock) + | v + | +----+----+ +----+----+ (clock) +----+----+ + | +---| NSSCC | | GCC |--------->| MDIO | + | | +----+----+ +----+----+ +----+----+ + | | |(clock & reset) |(clock) + | | v v + | | +-----------------------------+----------+----------+---------+ + | | | +-----+ |EDMA FIFO | | EIP FIFO| + | | | | SCH | +----------+ +---------+ + | | | +-----+ | | | + | | | +------+ +------+ +-------------------+ | + | | | | BM | | QM | IPQ9574-PPE | L2/L3 Process | | + | | | +------+ +------+ +-------------------+ | + | | | | | + | | | +-------+ +-------+ +-------+ +-------+ +-------+ +-------+ | + | | | | MAC0 | | MAC1 | | MAC2 | | MAC3 | | XGMAC4| |XGMAC5 | | + | | | +---+---+ +---+---+ +---+---+ +---+---+ +---+---+ +---+---+ | + | | | | | | | | | | + | | +-----+---------+---------+---------+---------+---------+-----+ + | | | | | | | | + | | +---+---------+---------+---------+---+ +---+---+ +---+---+ + +--+---->| PCS0 | | PCS1 | | PCS2 | + |(clock) +---+---------+---------+---------+---+ +---+---+ +---+---+ + | | | | | | | + | +---+---------+---------+---------+---+ +---+---+ +---+---+ + +------->| QCA8075 PHY | | PHY4 | | PHY5 | + (clock) +-------------------------------------+ +-------+ +-------+ + +properties: + compatible: + enum: + - qcom,ipq9574-ppe + + reg: + maxItems: 1 + + clocks: + items: + - description: PPE core clock from NSS clock controller + - description: PPE APB (Advanced Peripheral Bus) clock from NSS clock controller + - description: PPE ingress process engine clock from NSS clock controller + - description: PPE BM, QM and scheduler clock from NSS clock controller + + clock-names: + items: + - const: ppe + - const: apb + - const: ipe + - const: btq + + resets: + maxItems: 1 + description: PPE reset, which is necessary before configuring PPE hardware + + interconnects: + items: + - description: Clock path leading to PPE switch core function + - description: Clock path leading to PPE register access + - description: Clock path leading to QoS generation + - description: Clock path leading to timeout reference + - description: Clock path leading to NSS NOC from memory NOC + - description: Clock path leading to memory NOC from NSS NOC + - description: Clock path leading to enhanced memory NOC from NSS NOC + + interconnect-names: + items: + - const: ppe + - const: ppe_cfg + - const: qos_gen + - const: timeout_ref + - const: nssnoc_memnoc + - const: memnoc_nssnoc + - const: memnoc_nssnoc_1 + + ethernet-dma: + type: object + additionalProperties: false + description: + EDMA (Ethernet DMA) is used to transmit packets between PPE and ARM + host CPU. There are 32 TX descriptor rings, 32 TX completion rings, + 24 RX descriptor rings and 8 RX fill rings supported. + + properties: + clocks: + items: + - description: EDMA system clock from NSS Clock Controller + - description: EDMA APB (Advanced Peripheral Bus) clock from + NSS Clock Controller + + clock-names: + items: + - const: sys + - const: apb + + resets: + maxItems: 1 + description: EDMA reset from NSS clock controller + + interrupts: + minItems: 29 + maxItems: 57 + + interrupt-names: + minItems: 29 + maxItems: 57 + items: + pattern: '^(txcmpl_([0-9]|[1-2][0-9]|3[0-1])|rxdesc_([0-9]|1[0-9]|2[0-3])|misc)$' + description: + Interrupts "txcmpl_[0-31]" are the Ethernet DMA Tx completion ring interrupts. + Interrupts "rxdesc_[0-23]" are the Ethernet DMA Rx Descriptor ring interrupts. 
+ Interrupt "misc" is the Ethernet DMA miscellaneous error interrupt. + + required: + - clocks + - clock-names + - resets + - interrupts + - interrupt-names + +required: + - compatible + - reg + - clocks + - clock-names + - resets + - interconnects + - interconnect-names + - ethernet-dma + +allOf: + - $ref: ethernet-switch.yaml + +unevaluatedProperties: false + +examples: + - | + #include + #include + #include + + ethernet-switch@3a000000 { + compatible = "qcom,ipq9574-ppe"; + reg = <0x3a000000 0xbef800>; + clocks = <&nsscc 80>, + <&nsscc 79>, + <&nsscc 81>, + <&nsscc 78>; + clock-names = "ppe", + "apb", + "ipe", + "btq"; + resets = <&nsscc 108>; + interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>, + <&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>, + <&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>, + <&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>, + <&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>, + <&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>, + <&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>; + interconnect-names = "ppe", + "ppe_cfg", + "qos_gen", + "timeout_ref", + "nssnoc_memnoc", + "memnoc_nssnoc", + "memnoc_nssnoc_1"; + + ethernet-dma { + clocks = <&nsscc 77>, + <&nsscc 76>; + clock-names = "sys", + "apb"; + resets = <&nsscc 0>; + interrupts = , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + , + ; + interrupt-names = "txcmpl_8", + "txcmpl_9", + "txcmpl_10", + "txcmpl_11", + "txcmpl_12", + "txcmpl_13", + "txcmpl_14", + "txcmpl_15", + "txcmpl_16", + "txcmpl_17", + "txcmpl_18", + "txcmpl_19", + "txcmpl_20", + "txcmpl_21", + "txcmpl_22", + "txcmpl_23", + "txcmpl_24", + "txcmpl_25", + "txcmpl_26", + "txcmpl_27", + "txcmpl_28", + "txcmpl_29", + "txcmpl_30", + "txcmpl_31", + "rxdesc_20", + "rxdesc_21", + "rxdesc_22", + "rxdesc_23", + "misc"; + }; + + ethernet-ports { + #address-cells = <1>; + #size-cells = <0>; + + port@1 { + reg = <1>; + phy-mode = "qsgmii"; + managed = "in-band-status"; + phy-handle = <&phy0>; + pcs-handle = <&pcs0_mii0>; + clocks = <&nsscc 33>, + <&nsscc 34>, + <&nsscc 37>; + clock-names = "mac", + "rx", + "tx"; + resets = <&nsscc 29>, + <&nsscc 96>, + <&nsscc 97>; + reset-names = "mac", + "rx", + "tx"; + }; + + port@2 { + reg = <2>; + phy-mode = "qsgmii"; + managed = "in-band-status"; + phy-handle = <&phy1>; + pcs-handle = <&pcs0_mii1>; + clocks = <&nsscc 40>, + <&nsscc 41>, + <&nsscc 44>; + clock-names = "mac", + "rx", + "tx"; + resets = <&nsscc 30>, + <&nsscc 98>, + <&nsscc 99>; + reset-names = "mac", + "rx", + "tx"; + }; + + port@3 { + reg = <3>; + phy-mode = "qsgmii"; + managed = "in-band-status"; + phy-handle = <&phy2>; + pcs-handle = <&pcs0_mii2>; + clocks = <&nsscc 47>, + <&nsscc 48>, + <&nsscc 51>; + clock-names = "mac", + "rx", + "tx"; + resets = <&nsscc 31>, + <&nsscc 100>, + <&nsscc 101>; + reset-names = "mac", + "rx", + "tx"; + }; + + port@4 { + reg = <4>; + phy-mode = "qsgmii"; + managed = "in-band-status"; + phy-handle = <&phy3>; + pcs-handle = <&pcs0_mii3>; + clocks = <&nsscc 54>, + <&nsscc 55>, + <&nsscc 58>; + clock-names = "mac", + "rx", + "tx"; + resets = <&nsscc 32>, + <&nsscc 102>, + <&nsscc 103>; + reset-names = "mac", + "rx", + "tx"; + }; + + port@5 { + reg = <5>; + phy-mode = "usxgmii"; + managed = "in-band-status"; + phy-handle = <&phy4>; + pcs-handle = <&pcs1_mii0>; + clocks = <&nsscc 61>, + <&nsscc 62>, + <&nsscc 65>; + clock-names = "mac", + "rx", + "tx"; + resets = <&nsscc 33>, + <&nsscc 104>, + 
<&nsscc 105>; + reset-names = "mac", + "rx", + "tx"; + }; + + port@6 { + reg = <6>; + phy-mode = "usxgmii"; + managed = "in-band-status"; + phy-handle = <&phy5>; + pcs-handle = <&pcs2_mii0>; + clocks = <&nsscc 68>, + <&nsscc 69>, + <&nsscc 72>; + clock-names = "mac", + "rx", + "tx"; + resets = <&nsscc 34>, + <&nsscc 106>, + <&nsscc 107>; + reset-names = "mac", + "rx", + "tx"; + }; + }; + };
From patchwork Tue May 13 09:58:23 2025
X-Patchwork-Submitter: Luo Jie
X-Patchwork-Id: 889669
From: Luo Jie
Date: Tue, 13 May 2025 17:58:23 +0800
Subject: [PATCH net-next v4 03/14] net: ethernet: qualcomm: Add PPE driver for IPQ9574 SoC
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-3-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>

The PPE (Packet Process Engine) hardware block is available on Qualcomm IPQ SoCs that support the PPE architecture, such as the IPQ9574. The PPE in the IPQ9574 includes six integrated Ethernet MACs (for 6 PPE ports), buffer management, queue management and scheduler functions. The MACs can connect to external PHY or switch devices using the UNIPHY PCS block available in the SoC.
The PPE also includes various packet processing offload capabilities such as L3 routing and L2 bridging, VLAN and tunnel processing offload. It also includes Ethernet DMA function for transferring packets between ARM cores and PPE ethernet ports. This patch adds the base source files and Makefiles for the PPE driver such as platform driver registration, clock initialization, and PPE reset routines. Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/Kconfig | 15 ++ drivers/net/ethernet/qualcomm/Makefile | 1 + drivers/net/ethernet/qualcomm/ppe/Makefile | 7 + drivers/net/ethernet/qualcomm/ppe/ppe.c | 218 +++++++++++++++++++++++++++++ drivers/net/ethernet/qualcomm/ppe/ppe.h | 36 +++++ 5 files changed, 277 insertions(+) diff --git a/drivers/net/ethernet/qualcomm/Kconfig b/drivers/net/ethernet/qualcomm/Kconfig index 9210ff360fdc..59931c4edbeb 100644 --- a/drivers/net/ethernet/qualcomm/Kconfig +++ b/drivers/net/ethernet/qualcomm/Kconfig @@ -61,6 +61,21 @@ config QCOM_EMAC low power, Receive-Side Scaling (RSS), and IEEE 1588-2008 Precision Clock Synchronization Protocol. +config QCOM_PPE + tristate "Qualcomm Technologies, Inc. PPE Ethernet support" + depends on HAS_IOMEM && OF + depends on COMMON_CLK + select REGMAP_MMIO + help + This driver supports the Qualcomm Technologies, Inc. packet + process engine (PPE) available with IPQ SoC. The PPE includes + the ethernet MACs, Ethernet DMA (EDMA) and switch core that + supports L3 flow offload, L2 switch function, RSS and tunnel + offload. + + To compile this driver as a module, choose M here. The module + will be called qcom-ppe. + source "drivers/net/ethernet/qualcomm/rmnet/Kconfig" endif # NET_VENDOR_QUALCOMM diff --git a/drivers/net/ethernet/qualcomm/Makefile b/drivers/net/ethernet/qualcomm/Makefile index 9250976dd884..166a59aea363 100644 --- a/drivers/net/ethernet/qualcomm/Makefile +++ b/drivers/net/ethernet/qualcomm/Makefile @@ -11,4 +11,5 @@ qcauart-objs := qca_uart.o obj-y += emac/ +obj-$(CONFIG_QCOM_PPE) += ppe/ obj-$(CONFIG_RMNET) += rmnet/ diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile new file mode 100644 index 000000000000..63d50d3b4f2e --- /dev/null +++ b/drivers/net/ethernet/qualcomm/ppe/Makefile @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# Makefile for the device driver of PPE (Packet Process Engine) in IPQ SoC +# + +obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o +qcom-ppe-objs := ppe.o diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c new file mode 100644 index 000000000000..40da7d240594 --- /dev/null +++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c @@ -0,0 +1,218 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +/* PPE platform device probe, DTSI parser and PPE clock initializations. */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ppe.h" + +#define PPE_PORT_MAX 8 +#define PPE_CLK_RATE 353000000 + +/* ICC clocks for enabling PPE device. The avg_bw and peak_bw with value 0 + * will be updated by the clock rate of PPE. 
+ */ +static const struct icc_bulk_data ppe_icc_data[] = { + { + .name = "ppe", + .avg_bw = 0, + .peak_bw = 0, + }, + { + .name = "ppe_cfg", + .avg_bw = 0, + .peak_bw = 0, + }, + { + .name = "qos_gen", + .avg_bw = 6000, + .peak_bw = 6000, + }, + { + .name = "timeout_ref", + .avg_bw = 6000, + .peak_bw = 6000, + }, + { + .name = "nssnoc_memnoc", + .avg_bw = 533333, + .peak_bw = 533333, + }, + { + .name = "memnoc_nssnoc", + .avg_bw = 533333, + .peak_bw = 533333, + }, + { + .name = "memnoc_nssnoc_1", + .avg_bw = 533333, + .peak_bw = 533333, + }, +}; + +static const struct regmap_range ppe_readable_ranges[] = { + regmap_reg_range(0x0, 0x1ff), /* Global */ + regmap_reg_range(0x400, 0x5ff), /* LPI CSR */ + regmap_reg_range(0x1000, 0x11ff), /* GMAC0 */ + regmap_reg_range(0x1200, 0x13ff), /* GMAC1 */ + regmap_reg_range(0x1400, 0x15ff), /* GMAC2 */ + regmap_reg_range(0x1600, 0x17ff), /* GMAC3 */ + regmap_reg_range(0x1800, 0x19ff), /* GMAC4 */ + regmap_reg_range(0x1a00, 0x1bff), /* GMAC5 */ + regmap_reg_range(0xb000, 0xefff), /* PRX CSR */ + regmap_reg_range(0xf000, 0x1efff), /* IPE */ + regmap_reg_range(0x20000, 0x5ffff), /* PTX CSR */ + regmap_reg_range(0x60000, 0x9ffff), /* IPE L2 CSR */ + regmap_reg_range(0xb0000, 0xeffff), /* IPO CSR */ + regmap_reg_range(0x100000, 0x17ffff), /* IPE PC */ + regmap_reg_range(0x180000, 0x1bffff), /* PRE IPO CSR */ + regmap_reg_range(0x1d0000, 0x1dffff), /* Tunnel parser */ + regmap_reg_range(0x1e0000, 0x1effff), /* Ingress parse */ + regmap_reg_range(0x200000, 0x2fffff), /* IPE L3 */ + regmap_reg_range(0x300000, 0x3fffff), /* IPE tunnel */ + regmap_reg_range(0x400000, 0x4fffff), /* Scheduler */ + regmap_reg_range(0x500000, 0x503fff), /* XGMAC0 */ + regmap_reg_range(0x504000, 0x507fff), /* XGMAC1 */ + regmap_reg_range(0x508000, 0x50bfff), /* XGMAC2 */ + regmap_reg_range(0x50c000, 0x50ffff), /* XGMAC3 */ + regmap_reg_range(0x510000, 0x513fff), /* XGMAC4 */ + regmap_reg_range(0x514000, 0x517fff), /* XGMAC5 */ + regmap_reg_range(0x600000, 0x6fffff), /* BM */ + regmap_reg_range(0x800000, 0x9fffff), /* QM */ + regmap_reg_range(0xb00000, 0xbef800), /* EDMA */ +}; + +static const struct regmap_access_table ppe_reg_table = { + .yes_ranges = ppe_readable_ranges, + .n_yes_ranges = ARRAY_SIZE(ppe_readable_ranges), +}; + +static const struct regmap_config regmap_config_ipq9574 = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .rd_table = &ppe_reg_table, + .wr_table = &ppe_reg_table, + .max_register = 0xbef800, + .fast_io = true, +}; + +static int ppe_clock_init_and_reset(struct ppe_device *ppe_dev) +{ + unsigned long ppe_rate = ppe_dev->clk_rate; + struct device *dev = ppe_dev->dev; + struct reset_control *rstc; + struct clk_bulk_data *clks; + struct clk *clk; + int ret, i; + + for (i = 0; i < ppe_dev->num_icc_paths; i++) { + ppe_dev->icc_paths[i].name = ppe_icc_data[i].name; + ppe_dev->icc_paths[i].avg_bw = ppe_icc_data[i].avg_bw ? : + Bps_to_icc(ppe_rate); + ppe_dev->icc_paths[i].peak_bw = ppe_icc_data[i].peak_bw ? : + Bps_to_icc(ppe_rate); + } + + ret = devm_of_icc_bulk_get(dev, ppe_dev->num_icc_paths, + ppe_dev->icc_paths); + if (ret) + return ret; + + ret = icc_bulk_set_bw(ppe_dev->num_icc_paths, ppe_dev->icc_paths); + if (ret) + return ret; + + /* The PPE clocks have a common parent clock. Setting the clock + * rate of "ppe" ensures the clock rate of all PPE clocks is + * configured to the same rate. 
+ */ + clk = devm_clk_get(dev, "ppe"); + if (IS_ERR(clk)) + return PTR_ERR(clk); + + ret = clk_set_rate(clk, ppe_rate); + if (ret) + return ret; + + ret = devm_clk_bulk_get_all_enabled(dev, &clks); + if (ret < 0) + return ret; + + /* Reset the PPE. */ + rstc = devm_reset_control_get_exclusive(dev, NULL); + if (IS_ERR(rstc)) + return PTR_ERR(rstc); + + ret = reset_control_assert(rstc); + if (ret) + return ret; + + /* The delay 10 ms of assert is necessary for resetting PPE. */ + usleep_range(10000, 11000); + + return reset_control_deassert(rstc); +} + +static int qcom_ppe_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct ppe_device *ppe_dev; + void __iomem *base; + int ret, num_icc; + + num_icc = ARRAY_SIZE(ppe_icc_data); + ppe_dev = devm_kzalloc(dev, struct_size(ppe_dev, icc_paths, num_icc), + GFP_KERNEL); + if (!ppe_dev) + return -ENOMEM; + + base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(base)) + return dev_err_probe(dev, PTR_ERR(base), "PPE ioremap failed\n"); + + ppe_dev->regmap = devm_regmap_init_mmio(dev, base, ®map_config_ipq9574); + if (IS_ERR(ppe_dev->regmap)) + return dev_err_probe(dev, PTR_ERR(ppe_dev->regmap), + "PPE initialize regmap failed\n"); + ppe_dev->dev = dev; + ppe_dev->clk_rate = PPE_CLK_RATE; + ppe_dev->num_ports = PPE_PORT_MAX; + ppe_dev->num_icc_paths = num_icc; + + ret = ppe_clock_init_and_reset(ppe_dev); + if (ret) + return dev_err_probe(dev, ret, "PPE clock config failed\n"); + + platform_set_drvdata(pdev, ppe_dev); + + return 0; +} + +static const struct of_device_id qcom_ppe_of_match[] = { + { .compatible = "qcom,ipq9574-ppe" }, + {} +}; +MODULE_DEVICE_TABLE(of, qcom_ppe_of_match); + +static struct platform_driver qcom_ppe_driver = { + .driver = { + .name = "qcom_ppe", + .of_match_table = qcom_ppe_of_match, + }, + .probe = qcom_ppe_probe, +}; +module_platform_driver(qcom_ppe_driver); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Qualcomm Technologies, Inc. IPQ PPE driver"); diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h new file mode 100644 index 000000000000..cc6767b7c2b8 --- /dev/null +++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef __PPE_H__ +#define __PPE_H__ + +#include +#include + +struct device; +struct regmap; + +/** + * struct ppe_device - PPE device private data. + * @dev: PPE device structure. + * @regmap: PPE register map. + * @clk_rate: PPE clock rate. + * @num_ports: Number of PPE ports. + * @num_icc_paths: Number of interconnect paths. + * @icc_paths: Interconnect path array. + * + * PPE device is the instance of PPE hardware, which is used to + * configure PPE packet process modules such as BM (buffer management), + * QM (queue management), and scheduler. 
+ */ +struct ppe_device { + struct device *dev; + struct regmap *regmap; + unsigned long clk_rate; + unsigned int num_ports; + unsigned int num_icc_paths; + struct icc_bulk_data icc_paths[] __counted_by(num_icc_paths); +}; +#endif
From patchwork Tue May 13 09:58:25 2025
X-Patchwork-Submitter: Luo Jie
X-Patchwork-Id: 889668
From: Luo Jie
Date: Tue, 13 May 2025 17:58:25 +0800
Subject: [PATCH net-next v4 05/14] net: ethernet: qualcomm: Initialize PPE queue management for IPQ9574
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-5-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>

QM (queue management) configurations decide the length of the PPE queues and the queue depth thresholds that are used to drop packets in the event of congestion. There are two types of PPE queues: unicast queues (0-255) and multicast queues (256-299). These queue types are used to forward different types of traffic, and are configured with different lengths.
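As a quick illustration of the unicast/multicast split described above, below is a minimal standalone C sketch (not part of this series) of how the per-queue admission-control register address is derived, mirroring the address computation in ppe_config_qm() further down in this patch. The register constants are copied from ppe_regs.h in this series; the helper name ppe_queue_cfg_addr() and the main() harness are made up purely for illustration.

/*
 * Standalone sketch: unicast queues (0-255) index one config table,
 * multicast queues (256-299) another, each with a 0x10 stride.
 * Constants are copied from ppe_regs.h in this series; the helper
 * mirrors the address computation used by ppe_config_qm().
 */
#include <stdio.h>

#define PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR	0x848000
#define PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES	256
#define PPE_AC_UNICAST_QUEUE_CFG_TBL_INC	0x10
#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_ADDR	0x84a000
#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_INC	0x10

static unsigned int ppe_queue_cfg_addr(unsigned int queue_id)
{
	/* Mirrors the per-queue register address computation in ppe_config_qm(). */
	if (queue_id < PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES)
		return PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR +
		       PPE_AC_UNICAST_QUEUE_CFG_TBL_INC * queue_id;

	return PPE_AC_MULTICAST_QUEUE_CFG_TBL_ADDR +
	       PPE_AC_MULTICAST_QUEUE_CFG_TBL_INC * queue_id;
}

int main(void)
{
	/* Example: one unicast and one multicast queue. */
	printf("queue 10  -> cfg reg 0x%x\n", ppe_queue_cfg_addr(10));
	printf("queue 260 -> cfg reg 0x%x\n", ppe_queue_cfg_addr(260));
	return 0;
}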
Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 184 ++++++++++++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 85 ++++++++++++ 2 files changed, 268 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index 53efa2d4204e..6603091384ab 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -43,6 +43,29 @@ struct ppe_bm_port_config { bool dynamic; }; +/** + * struct ppe_qm_queue_config - PPE queue config. + * @queue_start: PPE start of queue ID. + * @queue_end: PPE end of queue ID. + * @prealloc_buf: Queue dedicated buffer number. + * @ceil: Ceil to start drop packet from queue. + * @weight: Weight value. + * @resume_offset: Resume offset from the threshold. + * @dynamic: Threshold value is decided dynamically or statically. + * + * Queue configuration decides the threshold to drop packet from PPE + * hardware queue. + */ +struct ppe_qm_queue_config { + unsigned int queue_start; + unsigned int queue_end; + unsigned int prealloc_buf; + unsigned int ceil; + unsigned int weight; + unsigned int resume_offset; + bool dynamic; +}; + /* There are total 2048 buffers available in PPE, out of which some * buffers are reserved for some specific purposes per PPE port. The * rest of the pool of 1550 buffers are assigned to the general 'group0' @@ -106,6 +129,40 @@ static const struct ppe_bm_port_config ipq9574_ppe_bm_port_config[] = { }, }; +/* QM fetches the packet from PPE buffer management for transmitting the + * packet out. The QM group configuration limits the total number of buffers + * enqueued by all PPE hardware queues. + * There are total 2048 buffers available, out of which some buffers are + * dedicated to hardware exception handlers. The remaining buffers are + * assigned to the general 'group0', which is the group assigned to all + * queues by default. + */ +static const int ipq9574_ppe_qm_group_config = 2000; + +/* Default QM settings for unicast and multicast queues for IPQ9754. */ +static const struct ppe_qm_queue_config ipq9574_ppe_qm_queue_config[] = { + { + /* QM settings for unicast queues 0 to 255. */ + .queue_start = 0, + .queue_end = 255, + .prealloc_buf = 0, + .ceil = 1200, + .weight = 7, + .resume_offset = 36, + .dynamic = true, + }, + { + /* QM settings for multicast queues 256 to 299. */ + .queue_start = 256, + .queue_end = 299, + .prealloc_buf = 0, + .ceil = 250, + .weight = 0, + .resume_offset = 36, + .dynamic = false, + }, +}; + static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id, const struct ppe_bm_port_config port_cfg) { @@ -193,7 +250,132 @@ static int ppe_config_bm(struct ppe_device *ppe_dev) return ret; } +/* Configure PPE hardware queue depth, which is decided by the threshold + * of queue. + */ +static int ppe_config_qm(struct ppe_device *ppe_dev) +{ + const struct ppe_qm_queue_config *queue_cfg; + int ret, i, queue_id, queue_cfg_count; + u32 reg, multicast_queue_cfg[5]; + u32 unicast_queue_cfg[4]; + u32 group_cfg[3]; + + /* Assign the buffer number to the group 0 by default. 
*/ + reg = PPE_AC_GRP_CFG_TBL_ADDR; + ret = regmap_bulk_read(ppe_dev->regmap, reg, + group_cfg, ARRAY_SIZE(group_cfg)); + if (ret) + goto qm_config_fail; + + PPE_AC_GRP_SET_BUF_LIMIT(group_cfg, ipq9574_ppe_qm_group_config); + + ret = regmap_bulk_write(ppe_dev->regmap, reg, + group_cfg, ARRAY_SIZE(group_cfg)); + if (ret) + goto qm_config_fail; + + queue_cfg = ipq9574_ppe_qm_queue_config; + queue_cfg_count = ARRAY_SIZE(ipq9574_ppe_qm_queue_config); + for (i = 0; i < queue_cfg_count; i++) { + queue_id = queue_cfg[i].queue_start; + + /* Configure threshold for dropping packets separately for + * unicast and multicast PPE queues. + */ + while (queue_id <= queue_cfg[i].queue_end) { + if (queue_id < PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES) { + reg = PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR + + PPE_AC_UNICAST_QUEUE_CFG_TBL_INC * queue_id; + + ret = regmap_bulk_read(ppe_dev->regmap, reg, + unicast_queue_cfg, + ARRAY_SIZE(unicast_queue_cfg)); + if (ret) + goto qm_config_fail; + + PPE_AC_UNICAST_QUEUE_SET_EN(unicast_queue_cfg, true); + PPE_AC_UNICAST_QUEUE_SET_GRP_ID(unicast_queue_cfg, 0); + PPE_AC_UNICAST_QUEUE_SET_PRE_LIMIT(unicast_queue_cfg, + queue_cfg[i].prealloc_buf); + PPE_AC_UNICAST_QUEUE_SET_DYNAMIC(unicast_queue_cfg, + queue_cfg[i].dynamic); + PPE_AC_UNICAST_QUEUE_SET_WEIGHT(unicast_queue_cfg, + queue_cfg[i].weight); + PPE_AC_UNICAST_QUEUE_SET_THRESHOLD(unicast_queue_cfg, + queue_cfg[i].ceil); + PPE_AC_UNICAST_QUEUE_SET_GRN_RESUME(unicast_queue_cfg, + queue_cfg[i].resume_offset); + + ret = regmap_bulk_write(ppe_dev->regmap, reg, + unicast_queue_cfg, + ARRAY_SIZE(unicast_queue_cfg)); + if (ret) + goto qm_config_fail; + } else { + reg = PPE_AC_MULTICAST_QUEUE_CFG_TBL_ADDR + + PPE_AC_MULTICAST_QUEUE_CFG_TBL_INC * queue_id; + + ret = regmap_bulk_read(ppe_dev->regmap, reg, + multicast_queue_cfg, + ARRAY_SIZE(multicast_queue_cfg)); + if (ret) + goto qm_config_fail; + + PPE_AC_MULTICAST_QUEUE_SET_EN(multicast_queue_cfg, true); + PPE_AC_MULTICAST_QUEUE_SET_GRN_GRP_ID(multicast_queue_cfg, 0); + PPE_AC_MULTICAST_QUEUE_SET_GRN_PRE_LIMIT(multicast_queue_cfg, + queue_cfg[i].prealloc_buf); + PPE_AC_MULTICAST_QUEUE_SET_GRN_THRESHOLD(multicast_queue_cfg, + queue_cfg[i].ceil); + PPE_AC_MULTICAST_QUEUE_SET_GRN_RESUME(multicast_queue_cfg, + queue_cfg[i].resume_offset); + + ret = regmap_bulk_write(ppe_dev->regmap, reg, + multicast_queue_cfg, + ARRAY_SIZE(multicast_queue_cfg)); + if (ret) + goto qm_config_fail; + } + + /* Enable enqueue. */ + reg = PPE_ENQ_OPR_TBL_ADDR + PPE_ENQ_OPR_TBL_INC * queue_id; + ret = regmap_clear_bits(ppe_dev->regmap, reg, + PPE_ENQ_OPR_TBL_ENQ_DISABLE); + if (ret) + goto qm_config_fail; + + /* Enable dequeue. */ + reg = PPE_DEQ_OPR_TBL_ADDR + PPE_DEQ_OPR_TBL_INC * queue_id; + ret = regmap_clear_bits(ppe_dev->regmap, reg, + PPE_DEQ_OPR_TBL_DEQ_DISABLE); + if (ret) + goto qm_config_fail; + + queue_id++; + } + } + + /* Enable queue counter for all PPE hardware queues. 
*/ + ret = regmap_set_bits(ppe_dev->regmap, PPE_EG_BRIDGE_CONFIG_ADDR, + PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN); + if (ret) + goto qm_config_fail; + + return 0; + +qm_config_fail: + dev_err(ppe_dev->dev, "PPE QM config error %d\n", ret); + return ret; +} + int ppe_hw_config(struct ppe_device *ppe_dev) { - return ppe_config_bm(ppe_dev); + int ret; + + ret = ppe_config_bm(ppe_dev); + if (ret) + return ret; + + return ppe_config_qm(ppe_dev); } diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index 3f982c6f42fa..692ea7b71dfc 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -9,6 +9,16 @@ #include +/* PPE queue counters enable/disable control. */ +#define PPE_EG_BRIDGE_CONFIG_ADDR 0x20044 +#define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2) + +/* Table addresses for per-queue dequeue setting. */ +#define PPE_DEQ_OPR_TBL_ADDR 0x430000 +#define PPE_DEQ_OPR_TBL_ENTRIES 300 +#define PPE_DEQ_OPR_TBL_INC 0x10 +#define PPE_DEQ_OPR_TBL_DEQ_DISABLE BIT(0) + /* There are 15 BM ports and 4 BM groups supported by PPE. * BM port (0-7) is for EDMA port 0, BM port (8-13) is for * PPE physical port 1-6 and BM port 14 is for EIP port. @@ -56,4 +66,79 @@ FIELD_MODIFY(PPE_BM_PORT_FC_W1_DYNAMIC, (tbl_cfg) + 0x1, value) #define PPE_BM_PORT_FC_SET_PRE_ALLOC(tbl_cfg, value) \ FIELD_MODIFY(PPE_BM_PORT_FC_W1_PRE_ALLOC, (tbl_cfg) + 0x1, value) + +/* PPE unicast queue (0-255) configurations. */ +#define PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR 0x848000 +#define PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES 256 +#define PPE_AC_UNICAST_QUEUE_CFG_TBL_INC 0x10 +#define PPE_AC_UNICAST_QUEUE_CFG_W0_EN BIT(0) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_WRED_EN BIT(1) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_FC_EN BIT(2) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_CLR_AWARE BIT(3) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_GRP_ID GENMASK(5, 4) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_PRE_LIMIT GENMASK(16, 6) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_DYNAMIC BIT(17) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_WEIGHT GENMASK(20, 18) +#define PPE_AC_UNICAST_QUEUE_CFG_W0_THRESHOLD GENMASK(31, 21) +#define PPE_AC_UNICAST_QUEUE_CFG_W3_GRN_RESUME GENMASK(23, 13) + +#define PPE_AC_UNICAST_QUEUE_SET_EN(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_EN, tbl_cfg, value) +#define PPE_AC_UNICAST_QUEUE_SET_GRP_ID(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_GRP_ID, tbl_cfg, value) +#define PPE_AC_UNICAST_QUEUE_SET_PRE_LIMIT(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_PRE_LIMIT, tbl_cfg, value) +#define PPE_AC_UNICAST_QUEUE_SET_DYNAMIC(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_DYNAMIC, tbl_cfg, value) +#define PPE_AC_UNICAST_QUEUE_SET_WEIGHT(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_WEIGHT, tbl_cfg, value) +#define PPE_AC_UNICAST_QUEUE_SET_THRESHOLD(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_THRESHOLD, tbl_cfg, value) +#define PPE_AC_UNICAST_QUEUE_SET_GRN_RESUME(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W3_GRN_RESUME, (tbl_cfg) + 0x3, value) + +/* PPE multicast queue (256-299) configurations. 
*/ +#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_ADDR 0x84a000 +#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_ENTRIES 44 +#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_INC 0x10 +#define PPE_AC_MULTICAST_QUEUE_CFG_W0_EN BIT(0) +#define PPE_AC_MULTICAST_QUEUE_CFG_W0_FC_EN BIT(1) +#define PPE_AC_MULTICAST_QUEUE_CFG_W0_CLR_AWARE BIT(2) +#define PPE_AC_MULTICAST_QUEUE_CFG_W0_GRP_ID GENMASK(4, 3) +#define PPE_AC_MULTICAST_QUEUE_CFG_W0_PRE_LIMIT GENMASK(15, 5) +#define PPE_AC_MULTICAST_QUEUE_CFG_W0_THRESHOLD GENMASK(26, 16) +#define PPE_AC_MULTICAST_QUEUE_CFG_W2_RESUME GENMASK(17, 7) + +#define PPE_AC_MULTICAST_QUEUE_SET_EN(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_EN, tbl_cfg, value) +#define PPE_AC_MULTICAST_QUEUE_SET_GRN_GRP_ID(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_GRP_ID, tbl_cfg, value) +#define PPE_AC_MULTICAST_QUEUE_SET_GRN_PRE_LIMIT(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_PRE_LIMIT, tbl_cfg, value) +#define PPE_AC_MULTICAST_QUEUE_SET_GRN_THRESHOLD(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_THRESHOLD, tbl_cfg, value) +#define PPE_AC_MULTICAST_QUEUE_SET_GRN_RESUME(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W2_RESUME, (tbl_cfg) + 0x2, value) + +/* PPE admission control group (0-3) configurations */ +#define PPE_AC_GRP_CFG_TBL_ADDR 0x84c000 +#define PPE_AC_GRP_CFG_TBL_ENTRIES 0x4 +#define PPE_AC_GRP_CFG_TBL_INC 0x10 +#define PPE_AC_GRP_W0_AC_EN BIT(0) +#define PPE_AC_GRP_W0_AC_FC_EN BIT(1) +#define PPE_AC_GRP_W0_CLR_AWARE BIT(2) +#define PPE_AC_GRP_W0_THRESHOLD_LOW GENMASK(31, 25) +#define PPE_AC_GRP_W1_THRESHOLD_HIGH GENMASK(3, 0) +#define PPE_AC_GRP_W1_BUF_LIMIT GENMASK(14, 4) +#define PPE_AC_GRP_W2_RESUME_GRN GENMASK(15, 5) +#define PPE_AC_GRP_W2_PRE_ALLOC GENMASK(26, 16) + +#define PPE_AC_GRP_SET_BUF_LIMIT(tbl_cfg, value) \ + FIELD_MODIFY(PPE_AC_GRP_W1_BUF_LIMIT, (tbl_cfg) + 0x1, value) + +/* Table addresses for per-queue enqueue setting. 
*/ +#define PPE_ENQ_OPR_TBL_ADDR 0x85c000 +#define PPE_ENQ_OPR_TBL_ENTRIES 300 +#define PPE_ENQ_OPR_TBL_INC 0x10 +#define PPE_ENQ_OPR_TBL_ENQ_DISABLE BIT(0) #endif
From patchwork Tue May 13 09:58:27 2025
X-Patchwork-Submitter: Luo Jie
X-Patchwork-Id: 889667
From: Luo Jie
Date: Tue, 13 May 2025 17:58:27 +0800
Subject: [PATCH net-next v4 07/14] net: ethernet: qualcomm: Initialize PPE queue settings
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-7-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>

Configure the unicast and multicast hardware queues for the PPE ports to enable packet forwarding between the ports. Each PPE port is assigned a range of queues. The queue ID selected for a packet is decided by the queue base and a queue offset, which are configured based on the internal priority and the RSS hash value of the packet.
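To make the queue selection rule above concrete, here is a small standalone C sketch (not code from this series). It models the final queue ID as the per-port queue base plus an offset looked up by the packet's internal priority and RSS hash, as the commit message describes. The table sizes, the offset-table layout and the helper name ppe_select_queue() are assumptions made only for illustration; the real mapping is programmed into PPE hardware tables by the driver.

/*
 * Conceptual model of PPE queue selection: queue ID = per-port base +
 * offset configured from (internal priority, RSS hash). Table sizes and
 * names below are illustrative assumptions, not driver definitions.
 */
#include <stdio.h>

#define NUM_PRIORITIES		16	/* assumed number of internal priorities */
#define NUM_HASH_BUCKETS	4	/* assumed number of RSS hash buckets */

struct ppe_queue_map {
	unsigned int queue_base;	/* per-port base queue ID */
	/* Offset programmed per (priority, hash bucket) pair. */
	unsigned int queue_offset[NUM_PRIORITIES][NUM_HASH_BUCKETS];
};

static unsigned int ppe_select_queue(const struct ppe_queue_map *map,
				     unsigned int priority, unsigned int rss_hash)
{
	unsigned int prio = priority % NUM_PRIORITIES;
	unsigned int bucket = rss_hash % NUM_HASH_BUCKETS;

	/* Queue ID = port queue base + configured offset, per the commit message. */
	return map->queue_base + map->queue_offset[prio][bucket];
}

int main(void)
{
	struct ppe_queue_map map = { .queue_base = 144 };	/* e.g. one port's unicast base */
	unsigned int prio, bucket;

	/* Fill the offset table with a simple example pattern. */
	for (prio = 0; prio < NUM_PRIORITIES; prio++)
		for (bucket = 0; bucket < NUM_HASH_BUCKETS; bucket++)
			map.queue_offset[prio][bucket] = prio * NUM_HASH_BUCKETS + bucket;

	printf("priority 1, hash 0x7 -> queue %u\n", ppe_select_queue(&map, 1, 0x7));
	return 0;
}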
Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 356 ++++++++++++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_config.h | 63 +++++ drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 21 ++ 3 files changed, 439 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index fe2d44ab59cb..e7b3921e85ec 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -138,6 +138,34 @@ struct ppe_scheduler_port_config { unsigned int drr_node_id; }; +/** + * struct ppe_port_schedule_resource - PPE port scheduler resource. + * @ucastq_start: Unicast queue start ID. + * @ucastq_end: Unicast queue end ID. + * @mcastq_start: Multicast queue start ID. + * @mcastq_end: Multicast queue end ID. + * @flow_id_start: Flow start ID. + * @flow_id_end: Flow end ID. + * @l0node_start: Scheduler node start ID for queue level. + * @l0node_end: Scheduler node end ID for queue level. + * @l1node_start: Scheduler node start ID for flow level. + * @l1node_end: Scheduler node end ID for flow level. + * + * PPE scheduler resource allocated among the PPE ports. + */ +struct ppe_port_schedule_resource { + unsigned int ucastq_start; + unsigned int ucastq_end; + unsigned int mcastq_start; + unsigned int mcastq_end; + unsigned int flow_id_start; + unsigned int flow_id_end; + unsigned int l0node_start; + unsigned int l0node_end; + unsigned int l1node_start; + unsigned int l1node_end; +}; + /* There are total 2048 buffers available in PPE, out of which some * buffers are reserved for some specific purposes per PPE port. The * rest of the pool of 1550 buffers are assigned to the general 'group0' @@ -701,6 +729,111 @@ static const struct ppe_scheduler_port_config ppe_port_sch_config[] = { }, }; +/* The scheduler resource is applied to each PPE port, The resource + * includes the unicast & multicast queues, flow nodes and DRR nodes. 
+ */ +static const struct ppe_port_schedule_resource ppe_scheduler_res[] = { + { .ucastq_start = 0, + .ucastq_end = 63, + .mcastq_start = 256, + .mcastq_end = 271, + .flow_id_start = 0, + .flow_id_end = 0, + .l0node_start = 0, + .l0node_end = 7, + .l1node_start = 0, + .l1node_end = 0, + }, + { .ucastq_start = 144, + .ucastq_end = 159, + .mcastq_start = 272, + .mcastq_end = 275, + .flow_id_start = 36, + .flow_id_end = 39, + .l0node_start = 48, + .l0node_end = 63, + .l1node_start = 8, + .l1node_end = 11, + }, + { .ucastq_start = 160, + .ucastq_end = 175, + .mcastq_start = 276, + .mcastq_end = 279, + .flow_id_start = 40, + .flow_id_end = 43, + .l0node_start = 64, + .l0node_end = 79, + .l1node_start = 12, + .l1node_end = 15, + }, + { .ucastq_start = 176, + .ucastq_end = 191, + .mcastq_start = 280, + .mcastq_end = 283, + .flow_id_start = 44, + .flow_id_end = 47, + .l0node_start = 80, + .l0node_end = 95, + .l1node_start = 16, + .l1node_end = 19, + }, + { .ucastq_start = 192, + .ucastq_end = 207, + .mcastq_start = 284, + .mcastq_end = 287, + .flow_id_start = 48, + .flow_id_end = 51, + .l0node_start = 96, + .l0node_end = 111, + .l1node_start = 20, + .l1node_end = 23, + }, + { .ucastq_start = 208, + .ucastq_end = 223, + .mcastq_start = 288, + .mcastq_end = 291, + .flow_id_start = 52, + .flow_id_end = 55, + .l0node_start = 112, + .l0node_end = 127, + .l1node_start = 24, + .l1node_end = 27, + }, + { .ucastq_start = 224, + .ucastq_end = 239, + .mcastq_start = 292, + .mcastq_end = 295, + .flow_id_start = 56, + .flow_id_end = 59, + .l0node_start = 128, + .l0node_end = 143, + .l1node_start = 28, + .l1node_end = 31, + }, + { .ucastq_start = 240, + .ucastq_end = 255, + .mcastq_start = 296, + .mcastq_end = 299, + .flow_id_start = 60, + .flow_id_end = 63, + .l0node_start = 144, + .l0node_end = 159, + .l1node_start = 32, + .l1node_end = 35, + }, + { .ucastq_start = 64, + .ucastq_end = 143, + .mcastq_start = 0, + .mcastq_end = 0, + .flow_id_start = 1, + .flow_id_end = 35, + .l0node_start = 8, + .l0node_end = 47, + .l1node_start = 1, + .l1node_end = 7, + }, +}; + /* Set the PPE queue level scheduler configuration. */ static int ppe_scheduler_l0_queue_map_set(struct ppe_device *ppe_dev, int node_id, int port, @@ -832,6 +965,149 @@ int ppe_queue_scheduler_set(struct ppe_device *ppe_dev, port, scheduler_cfg); } +/** + * ppe_queue_ucast_base_set - Set PPE unicast queue base ID and profile ID + * @ppe_dev: PPE device + * @queue_dst: PPE queue destination configuration + * @queue_base: PPE queue base ID + * @profile_id: Profile ID + * + * The PPE unicast queue base ID and profile ID are configured based on the + * destination port information that can be service code or CPU code or the + * destination port. + * + * Return: 0 on success, negative error code on failure. 
+ */ +int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev, + struct ppe_queue_ucast_dest queue_dst, + int queue_base, int profile_id) +{ + int index, profile_size; + u32 val, reg; + + profile_size = queue_dst.src_profile << 8; + if (queue_dst.service_code_en) + index = PPE_QUEUE_BASE_SERVICE_CODE + profile_size + + queue_dst.service_code; + else if (queue_dst.cpu_code_en) + index = PPE_QUEUE_BASE_CPU_CODE + profile_size + + queue_dst.cpu_code; + else + index = profile_size + queue_dst.dest_port; + + val = FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID, profile_id); + val |= FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID, queue_base); + reg = PPE_UCAST_QUEUE_MAP_TBL_ADDR + index * PPE_UCAST_QUEUE_MAP_TBL_INC; + + return regmap_write(ppe_dev->regmap, reg, val); +} + +/** + * ppe_queue_ucast_offset_pri_set - Set PPE unicast queue offset based on priority + * @ppe_dev: PPE device + * @profile_id: Profile ID + * @priority: PPE internal priority to be used to set queue offset + * @queue_offset: Queue offset used for calculating the destination queue ID + * + * The PPE unicast queue offset is configured based on the PPE + * internal priority. + * + * Return: 0 on success, negative error code on failure. + */ +int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev, + int profile_id, + int priority, + int queue_offset) +{ + u32 val, reg; + int index; + + index = (profile_id << 4) + priority; + val = FIELD_PREP(PPE_UCAST_PRIORITY_MAP_TBL_CLASS, queue_offset); + reg = PPE_UCAST_PRIORITY_MAP_TBL_ADDR + index * PPE_UCAST_PRIORITY_MAP_TBL_INC; + + return regmap_write(ppe_dev->regmap, reg, val); +} + +/** + * ppe_queue_ucast_offset_hash_set - Set PPE unicast queue offset based on hash + * @ppe_dev: PPE device + * @profile_id: Profile ID + * @rss_hash: Packet hash value to be used to set queue offset + * @queue_offset: Queue offset used for calculating the destination queue ID + * + * The PPE unicast queue offset is configured based on the RSS hash value. + * + * Return: 0 on success, negative error code on failure. + */ +int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev, + int profile_id, + int rss_hash, + int queue_offset) +{ + u32 val, reg; + int index; + + index = (profile_id << 8) + rss_hash; + val = FIELD_PREP(PPE_UCAST_HASH_MAP_TBL_HASH, queue_offset); + reg = PPE_UCAST_HASH_MAP_TBL_ADDR + index * PPE_UCAST_HASH_MAP_TBL_INC; + + return regmap_write(ppe_dev->regmap, reg, val); +} + +/** + * ppe_port_resource_get - Get PPE resource per port + * @ppe_dev: PPE device + * @port: PPE port + * @type: Resource type + * @res_start: Resource start ID returned + * @res_end: Resource end ID returned + * + * PPE resource is assigned per PPE port, which is acquired for QoS scheduler. + * + * Return: 0 on success, negative error code on failure. + */ +int ppe_port_resource_get(struct ppe_device *ppe_dev, int port, + enum ppe_resource_type type, + int *res_start, int *res_end) +{ + struct ppe_port_schedule_resource res; + + /* The reserved resource with the maximum port ID of PPE is + * also allowed to be acquired. 
+ */ + if (port > ppe_dev->num_ports) + return -EINVAL; + + res = ppe_scheduler_res[port]; + switch (type) { + case PPE_RES_UCAST: + *res_start = res.ucastq_start; + *res_end = res.ucastq_end; + break; + case PPE_RES_MCAST: + *res_start = res.mcastq_start; + *res_end = res.mcastq_end; + break; + case PPE_RES_FLOW_ID: + *res_start = res.flow_id_start; + *res_end = res.flow_id_end; + break; + case PPE_RES_L0_NODE: + *res_start = res.l0node_start; + *res_end = res.l0node_end; + break; + case PPE_RES_L1_NODE: + *res_start = res.l1node_start; + *res_end = res.l1node_end; + break; + default: + return -EINVAL; + } + + return 0; +} + static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id, const struct ppe_bm_port_config port_cfg) { @@ -1167,6 +1443,80 @@ static int ppe_config_scheduler(struct ppe_device *ppe_dev) return ret; }; +/* Configure PPE queue destination of each PPE port. */ +static int ppe_queue_dest_init(struct ppe_device *ppe_dev) +{ + int ret, port_id, index, q_base, q_offset, res_start, res_end, pri_max; + struct ppe_queue_ucast_dest queue_dst; + + for (port_id = 0; port_id < ppe_dev->num_ports; port_id++) { + memset(&queue_dst, 0, sizeof(queue_dst)); + + ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_UCAST, + &res_start, &res_end); + if (ret) + return ret; + + q_base = res_start; + queue_dst.dest_port = port_id; + + /* Configure queue base ID and profile ID that is same as + * physical port ID. + */ + ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst, + q_base, port_id); + if (ret) + return ret; + + /* Queue priority range supported by each PPE port */ + ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_L0_NODE, + &res_start, &res_end); + if (ret) + return ret; + + pri_max = res_end - res_start; + + /* Redirect ARP reply packet with the max priority on CPU port, + * which keeps the ARP reply directed to CPU (CPU code is 101) + * with highest priority queue of EDMA. + */ + if (port_id == 0) { + memset(&queue_dst, 0, sizeof(queue_dst)); + + queue_dst.cpu_code_en = true; + queue_dst.cpu_code = 101; + ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst, + q_base + pri_max, + 0); + if (ret) + return ret; + } + + /* Initialize the queue offset of internal priority. */ + for (index = 0; index < PPE_QUEUE_INTER_PRI_NUM; index++) { + q_offset = index > pri_max ? pri_max : index; + + ret = ppe_queue_ucast_offset_pri_set(ppe_dev, port_id, + index, q_offset); + if (ret) + return ret; + } + + /* Initialize the queue offset of RSS hash as 0 to avoid the + * random hardware value that will lead to the unexpected + * destination queue generated. + */ + for (index = 0; index < PPE_QUEUE_HASH_NUM; index++) { + ret = ppe_queue_ucast_offset_hash_set(ppe_dev, port_id, + index, 0); + if (ret) + return ret; + } + } + + return 0; +} + int ppe_hw_config(struct ppe_device *ppe_dev) { int ret; @@ -1179,5 +1529,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev) if (ret) return ret; - return ppe_config_scheduler(ppe_dev); + ret = ppe_config_scheduler(ppe_dev); + if (ret) + return ret; + + return ppe_queue_dest_init(ppe_dev); } diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h index f28cfe7e1548..6553da34effe 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h @@ -8,6 +8,16 @@ #include "ppe.h" +/* There are different table index ranges for configuring queue base ID of + * the destination port, CPU code and service code. 
+ */ +#define PPE_QUEUE_BASE_DEST_PORT 0 +#define PPE_QUEUE_BASE_CPU_CODE 1024 +#define PPE_QUEUE_BASE_SERVICE_CODE 2048 + +#define PPE_QUEUE_INTER_PRI_NUM 16 +#define PPE_QUEUE_HASH_NUM 256 + /** * enum ppe_scheduler_frame_mode - PPE scheduler frame mode. * @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG, @@ -42,8 +52,61 @@ struct ppe_scheduler_cfg { enum ppe_scheduler_frame_mode frame_mode; }; +/** + * enum ppe_resource_type - PPE resource type. + * @PPE_RES_UCAST: Unicast queue resource. + * @PPE_RES_MCAST: Multicast queue resource. + * @PPE_RES_L0_NODE: Level 0 for queue based node resource. + * @PPE_RES_L1_NODE: Level 1 for flow based node resource. + * @PPE_RES_FLOW_ID: Flow based node resource. + */ +enum ppe_resource_type { + PPE_RES_UCAST, + PPE_RES_MCAST, + PPE_RES_L0_NODE, + PPE_RES_L1_NODE, + PPE_RES_FLOW_ID, +}; + +/** + * struct ppe_queue_ucast_dest - PPE unicast queue destination. + * @src_profile: Source profile. + * @service_code_en: Enable service code to map the queue base ID. + * @service_code: Service code. + * @cpu_code_en: Enable CPU code to map the queue base ID. + * @cpu_code: CPU code. + * @dest_port: destination port. + * + * PPE egress queue ID is decided by the service code if enabled, otherwise + * by the CPU code if enabled, or by destination port if both service code + * and CPU code are disabled. + */ +struct ppe_queue_ucast_dest { + int src_profile; + bool service_code_en; + int service_code; + bool cpu_code_en; + int cpu_code; + int dest_port; +}; + int ppe_hw_config(struct ppe_device *ppe_dev); int ppe_queue_scheduler_set(struct ppe_device *ppe_dev, int node_id, bool flow_level, int port, struct ppe_scheduler_cfg scheduler_cfg); +int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev, + struct ppe_queue_ucast_dest queue_dst, + int queue_base, + int profile_id); +int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev, + int profile_id, + int priority, + int queue_offset); +int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev, + int profile_id, + int rss_hash, + int queue_offset); +int ppe_port_resource_get(struct ppe_device *ppe_dev, int port, + enum ppe_resource_type type, + int *res_start, int *res_end); #endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index a1982fbecee7..5996fd40eb0a 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -164,6 +164,27 @@ #define PPE_BM_PORT_FC_SET_PRE_ALLOC(tbl_cfg, value) \ FIELD_MODIFY(PPE_BM_PORT_FC_W1_PRE_ALLOC, (tbl_cfg) + 0x1, value) +/* The queue base configurations based on destination port, + * service code or CPU code. + */ +#define PPE_UCAST_QUEUE_MAP_TBL_ADDR 0x810000 +#define PPE_UCAST_QUEUE_MAP_TBL_ENTRIES 3072 +#define PPE_UCAST_QUEUE_MAP_TBL_INC 0x10 +#define PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID GENMASK(3, 0) +#define PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID GENMASK(11, 4) + +/* The queue offset configurations based on RSS hash value. */ +#define PPE_UCAST_HASH_MAP_TBL_ADDR 0x830000 +#define PPE_UCAST_HASH_MAP_TBL_ENTRIES 4096 +#define PPE_UCAST_HASH_MAP_TBL_INC 0x10 +#define PPE_UCAST_HASH_MAP_TBL_HASH GENMASK(7, 0) + +/* The queue offset configurations based on PPE internal priority. */ +#define PPE_UCAST_PRIORITY_MAP_TBL_ADDR 0x842000 +#define PPE_UCAST_PRIORITY_MAP_TBL_ENTRIES 256 +#define PPE_UCAST_PRIORITY_MAP_TBL_INC 0x10 +#define PPE_UCAST_PRIORITY_MAP_TBL_CLASS GENMASK(3, 0) + /* PPE unicast queue (0-255) configurations. 
 */
 #define PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR 0x848000
 #define PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES 256

From patchwork Tue May 13 09:58:29 2025
X-Patchwork-Submitter: Luo Jie
X-Patchwork-Id: 889666
From: Luo Jie
Date: Tue, 13 May 2025 17:58:29 +0800
Subject: [PATCH net-next v4 09/14] net: ethernet: qualcomm: Initialize PPE port control settings
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-9-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lei Wei, Suruchi Agarwal,
 Pavithra R, Simon Horman, Jonathan Corbet, Kees Cook, "Gustavo A. R. Silva",
 Philipp Zabel
CC: Luo Jie

1. Enable the port-specific counters in the PPE.
2. Configure the default action as drop when the packet size is larger
   than the configured MTU of the physical port.
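As a compact illustration of the two settings introduced here, the sketch
below shows how a single port could be configured with the helper and
register macros added by this patch; it condenses ppe_port_config_init()
from the diff rather than adding new driver behaviour.

/* Sketch only: mirrors ppe_port_config_init() in ppe_config.c. */
static int ppe_port_setup_example(struct ppe_device *ppe_dev, int port)
{
	u32 reg, mru_mtu_val[3];
	int ret;

	/* Enable RX/TX counters for this port. */
	ret = ppe_counter_enable_set(ppe_dev, port);
	if (ret)
		return ret;

	reg = PPE_MRU_MTU_CTRL_TBL_ADDR + PPE_MRU_MTU_CTRL_TBL_INC * port;
	ret = regmap_bulk_read(ppe_dev->regmap, reg,
			       mru_mtu_val, ARRAY_SIZE(mru_mtu_val));
	if (ret)
		return ret;

	/* Drop packets that exceed the configured MRU/MTU of the port. */
	PPE_MRU_MTU_CTRL_SET_MRU_CMD(mru_mtu_val, PPE_ACTION_DROP);
	PPE_MRU_MTU_CTRL_SET_MTU_CMD(mru_mtu_val, PPE_ACTION_DROP);

	return regmap_bulk_write(ppe_dev->regmap, reg,
				 mru_mtu_val, ARRAY_SIZE(mru_mtu_val));
}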
Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 86 +++++++++++++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_config.h | 15 +++++ drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 47 ++++++++++++++ 3 files changed, 147 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index 1fecb6ea927c..dd7a4949f049 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -1178,6 +1178,44 @@ int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc, struct ppe_sc_cfg cfg) return regmap_write(ppe_dev->regmap, reg, val); } +/** + * ppe_counter_enable_set - Set PPE port counter enabled + * @ppe_dev: PPE device + * @port: PPE port ID + * + * Enable PPE counters on the given port for the unicast packet, multicast + * packet and VLAN packet received and transmitted by PPE. + * + * Return: 0 on success, negative error code on failure. + */ +int ppe_counter_enable_set(struct ppe_device *ppe_dev, int port) +{ + u32 reg, mru_mtu_val[3]; + int ret; + + reg = PPE_MRU_MTU_CTRL_TBL_ADDR + PPE_MRU_MTU_CTRL_TBL_INC * port; + ret = regmap_bulk_read(ppe_dev->regmap, reg, + mru_mtu_val, ARRAY_SIZE(mru_mtu_val)); + if (ret) + return ret; + + PPE_MRU_MTU_CTRL_SET_RX_CNT_EN(mru_mtu_val, true); + PPE_MRU_MTU_CTRL_SET_TX_CNT_EN(mru_mtu_val, true); + ret = regmap_bulk_write(ppe_dev->regmap, reg, + mru_mtu_val, ARRAY_SIZE(mru_mtu_val)); + if (ret) + return ret; + + reg = PPE_MC_MTU_CTRL_TBL_ADDR + PPE_MC_MTU_CTRL_TBL_INC * port; + ret = regmap_set_bits(ppe_dev->regmap, reg, PPE_MC_MTU_CTRL_TBL_TX_CNT_EN); + if (ret) + return ret; + + reg = PPE_PORT_EG_VLAN_TBL_ADDR + PPE_PORT_EG_VLAN_TBL_INC * port; + + return regmap_set_bits(ppe_dev->regmap, reg, PPE_PORT_EG_VLAN_TBL_TX_COUNTING_EN); +} + static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id, const struct ppe_bm_port_config port_cfg) { @@ -1606,6 +1644,48 @@ static int ppe_servcode_init(struct ppe_device *ppe_dev) return ppe_sc_config_set(ppe_dev, PPE_EDMA_SC_BYPASS_ID, sc_cfg); } +/* Initialize PPE port configurations. */ +static int ppe_port_config_init(struct ppe_device *ppe_dev) +{ + u32 reg, val, mru_mtu_val[3]; + int i, ret; + + /* MTU and MRU settings are not required for CPU port 0. */ + for (i = 1; i < ppe_dev->num_ports; i++) { + /* Enable Ethernet port counter */ + ret = ppe_counter_enable_set(ppe_dev, i); + if (ret) + return ret; + + reg = PPE_MRU_MTU_CTRL_TBL_ADDR + PPE_MRU_MTU_CTRL_TBL_INC * i; + ret = regmap_bulk_read(ppe_dev->regmap, reg, + mru_mtu_val, ARRAY_SIZE(mru_mtu_val)); + if (ret) + return ret; + + /* Drop the packet when the packet size is more than + * the MTU or MRU of the physical interface. + */ + PPE_MRU_MTU_CTRL_SET_MRU_CMD(mru_mtu_val, PPE_ACTION_DROP); + PPE_MRU_MTU_CTRL_SET_MTU_CMD(mru_mtu_val, PPE_ACTION_DROP); + ret = regmap_bulk_write(ppe_dev->regmap, reg, + mru_mtu_val, ARRAY_SIZE(mru_mtu_val)); + if (ret) + return ret; + + reg = PPE_MC_MTU_CTRL_TBL_ADDR + PPE_MC_MTU_CTRL_TBL_INC * i; + val = FIELD_PREP(PPE_MC_MTU_CTRL_TBL_MTU_CMD, PPE_ACTION_DROP); + ret = regmap_update_bits(ppe_dev->regmap, reg, + PPE_MC_MTU_CTRL_TBL_MTU_CMD, + val); + if (ret) + return ret; + } + + /* Enable CPU port counters. 
*/ + return ppe_counter_enable_set(ppe_dev, 0); +} + int ppe_hw_config(struct ppe_device *ppe_dev) { int ret; @@ -1626,5 +1706,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev) if (ret) return ret; - return ppe_servcode_init(ppe_dev); + ret = ppe_servcode_init(ppe_dev); + if (ret) + return ret; + + return ppe_port_config_init(ppe_dev); } diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h index 374635009ae3..277a77257b85 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h @@ -233,6 +233,20 @@ struct ppe_sc_cfg { int eip_offset_sel; }; +/** + * enum ppe_action_type - PPE action of the received packet. + * @PPE_ACTION_FORWARD: Packet forwarded per L2/L3 process. + * @PPE_ACTION_DROP: Packet dropped by PPE. + * @PPE_ACTION_COPY_TO_CPU: Packet copied to CPU port per multicast queue. + * @PPE_ACTION_REDIRECT_TO_CPU: Packet redirected to CPU port per unicast queue. + */ +enum ppe_action_type { + PPE_ACTION_FORWARD = 0, + PPE_ACTION_DROP = 1, + PPE_ACTION_COPY_TO_CPU = 2, + PPE_ACTION_REDIRECT_TO_CPU = 3, +}; + int ppe_hw_config(struct ppe_device *ppe_dev); int ppe_queue_scheduler_set(struct ppe_device *ppe_dev, int node_id, bool flow_level, int port, @@ -254,4 +268,5 @@ int ppe_port_resource_get(struct ppe_device *ppe_dev, int port, int *res_start, int *res_end); int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc, struct ppe_sc_cfg cfg); +int ppe_counter_enable_set(struct ppe_device *ppe_dev, int port); #endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index 5d43326ad99b..82716c3d42e9 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -40,6 +40,18 @@ #define PPE_SERVICE_SET_RX_CNT_EN(tbl_cfg, value) \ FIELD_MODIFY(PPE_SERVICE_W1_RX_CNT_EN, (tbl_cfg) + 0x1, value) +/* PPE port egress VLAN configurations. */ +#define PPE_PORT_EG_VLAN_TBL_ADDR 0x20020 +#define PPE_PORT_EG_VLAN_TBL_ENTRIES 8 +#define PPE_PORT_EG_VLAN_TBL_INC 4 +#define PPE_PORT_EG_VLAN_TBL_VLAN_TYPE BIT(0) +#define PPE_PORT_EG_VLAN_TBL_CTAG_MODE GENMASK(2, 1) +#define PPE_PORT_EG_VLAN_TBL_STAG_MODE GENMASK(4, 3) +#define PPE_PORT_EG_VLAN_TBL_VSI_TAG_MODE_EN BIT(5) +#define PPE_PORT_EG_VLAN_TBL_PCP_PROP_CMD BIT(6) +#define PPE_PORT_EG_VLAN_TBL_DEI_PROP_CMD BIT(7) +#define PPE_PORT_EG_VLAN_TBL_TX_COUNTING_EN BIT(8) + /* PPE queue counters enable/disable control. */ #define PPE_EG_BRIDGE_CONFIG_ADDR 0x20044 #define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2) @@ -65,6 +77,41 @@ #define PPE_EG_SERVICE_SET_TX_CNT_EN(tbl_cfg, value) \ FIELD_MODIFY(PPE_EG_SERVICE_W1_TX_CNT_EN, (tbl_cfg) + 0x1, value) +/* PPE port control configurations for the traffic to the multicast queues. */ +#define PPE_MC_MTU_CTRL_TBL_ADDR 0x60a00 +#define PPE_MC_MTU_CTRL_TBL_ENTRIES 8 +#define PPE_MC_MTU_CTRL_TBL_INC 4 +#define PPE_MC_MTU_CTRL_TBL_MTU GENMASK(13, 0) +#define PPE_MC_MTU_CTRL_TBL_MTU_CMD GENMASK(15, 14) +#define PPE_MC_MTU_CTRL_TBL_TX_CNT_EN BIT(16) + +/* PPE port control configurations for the traffic to the unicast queues. 
*/ +#define PPE_MRU_MTU_CTRL_TBL_ADDR 0x65000 +#define PPE_MRU_MTU_CTRL_TBL_ENTRIES 256 +#define PPE_MRU_MTU_CTRL_TBL_INC 0x10 +#define PPE_MRU_MTU_CTRL_W0_MRU GENMASK(13, 0) +#define PPE_MRU_MTU_CTRL_W0_MRU_CMD GENMASK(15, 14) +#define PPE_MRU_MTU_CTRL_W0_MTU GENMASK(29, 16) +#define PPE_MRU_MTU_CTRL_W0_MTU_CMD GENMASK(31, 30) +#define PPE_MRU_MTU_CTRL_W1_RX_CNT_EN BIT(0) +#define PPE_MRU_MTU_CTRL_W1_TX_CNT_EN BIT(1) +#define PPE_MRU_MTU_CTRL_W1_SRC_PROFILE GENMASK(3, 2) +#define PPE_MRU_MTU_CTRL_W1_INNER_PREC_LOW BIT(31) +#define PPE_MRU_MTU_CTRL_W2_INNER_PREC_HIGH GENMASK(1, 0) + +#define PPE_MRU_MTU_CTRL_SET_MRU(tbl_cfg, value) \ + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MRU, tbl_cfg, value) +#define PPE_MRU_MTU_CTRL_SET_MRU_CMD(tbl_cfg, value) \ + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MRU_CMD, tbl_cfg, value) +#define PPE_MRU_MTU_CTRL_SET_MTU(tbl_cfg, value) \ + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MTU, tbl_cfg, value) +#define PPE_MRU_MTU_CTRL_SET_MTU_CMD(tbl_cfg, value) \ + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MTU_CMD, tbl_cfg, value) +#define PPE_MRU_MTU_CTRL_SET_RX_CNT_EN(tbl_cfg, value) \ + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W1_RX_CNT_EN, (tbl_cfg) + 0x1, value) +#define PPE_MRU_MTU_CTRL_SET_TX_CNT_EN(tbl_cfg, value) \ + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W1_TX_CNT_EN, (tbl_cfg) + 0x1, value) + /* PPE service code configuration for destination port and counter. */ #define PPE_IN_L2_SERVICE_TBL_ADDR 0x66000 #define PPE_IN_L2_SERVICE_TBL_ENTRIES 256 From patchwork Tue May 13 09:58:31 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luo Jie X-Patchwork-Id: 889665 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0D69624EF76; Tue, 13 May 2025 09:59:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747130394; cv=none; b=fdMwYFu4fjqhOh8N8ZMdsDdxQE2MMT2vYJbAManC/xHh1IU//G/OAApooMzF7J6hO0j0xxkSxXYInEAIwVM9QUYMmQ12WuRO/OrXiN2Qh9TlQpf03Vc38At7ZObsgYaHKiEjvgb4Ornv+obl0WIHBjXo1HQkPRJV09undxInvhk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747130394; c=relaxed/simple; bh=BlNJ5t+bUTaBWb6Qh/kBzakrg4DmJ8basTXwW4rJH5w=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=QaXDR2ibaf3bxB/mB1DyqA8CGDkwFnmZKAlgCVc8ZwbX/quFE3ffSL+1NO3jKYsvQo9soZHZCKsV0YbeEVG6A5X3zZZ6rKIMXGje+X256WrN5+1MVOkxJCGPwiBYHrbtqYCBleRh3hdwqfeZ+jls2si3m3kFS2Y9QKzqbOmfFKI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=ldIcZrtj; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="ldIcZrtj" Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 54D9IRjl028451; Tue, 13 May 2025 09:59:39 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=quicinc.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= JC9s/wxtE4UM6Qo1URmWdNgQMlkOE90jvv0j7LFuxls=; b=ldIcZrtjl0Bg49MZ ACIyODZro9idxhUGMFgSN6xzevWZygZedkI0NYc+rw88kcN2SLNdTwOTc+5Ng7M6 GxSqahOg0RprOYcGYdwffpS2zPldMhon0D4335NkNVgwGvZ/pyp8RB/IBtNnm4H3 zP8X6n/6orweOfxJp9ABDlPgkSZESohA14WXqJdboVVnfvDpndwJV1G59ni3eVjf V5x1bf54m1mgGLXKNF6xOgJPfZXOGKa4SGF5k5sBmBI1mJOPQpN9CDb9yhoYll6T kBfAgHvq9V7fATyazzhopUSe5i3/DZ3gZiV9MHYMAzA5K1ykdpaTIhstm6YAShxv WfomkA== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 46hxvxf95e-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 13 May 2025 09:59:39 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 54D9xctO018264 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 13 May 2025 09:59:38 GMT Received: from nsssdc-sh01-lnx.ap.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Tue, 13 May 2025 02:59:32 -0700 From: Luo Jie Date: Tue, 13 May 2025 17:58:31 +0800 Subject: [PATCH net-next v4 11/14] net: ethernet: qualcomm: Initialize PPE queue to Ethernet DMA ring mapping Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20250513-qcom_ipq_ppe-v4-11-4fbe40cbbb71@quicinc.com> References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com> In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com> To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Rob Herring , Krzysztof Kozlowski , Conor Dooley , Lei Wei , Suruchi Agarwal , Pavithra R , "Simon Horman" , Jonathan Corbet , Kees Cook , "Gustavo A. R. 
Silva" , "Philipp Zabel" CC: , , , , , , , , , , , Luo Jie X-Mailer: b4 0.14.1 X-Developer-Signature: v=1; a=ed25519-sha256; t=1747130311; l=4626; i=quic_luoj@quicinc.com; s=20250209; h=from:subject:message-id; bh=BlNJ5t+bUTaBWb6Qh/kBzakrg4DmJ8basTXwW4rJH5w=; b=WfFIlVNDFoxcFsnEPG6SfavkR4V+snN1a+kuJpcrqQUoqCsV/U4kJ3pOzFQ6cUc2Lqw1ynO37 qFS+P62O709CfV4pBLgJmH0GJVeeQ13IDe5/6oJiebwB7AuJPg/Myiy X-Developer-Key: i=quic_luoj@quicinc.com; a=ed25519; pk=pzwy8bU5tJZ5UKGTv28n+QOuktaWuriznGmriA9Qkfc= X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNTEzMDA5NCBTYWx0ZWRfXznd+0vBFQZ72 88KPwYm+0yIUWzbkTbgmvtCvyEh+xjqQBVXmFmqJxdcXEABYfSUAH2MOAxN7OuL2AOjLvjM2FZY 0jp1AhnSh+8yGWFwwX+A+hvauV1H869bL75HpLuvwUsAE/8XafIbw0Fw28mP85H+83LwZrAqmDw 6aWKCvcOd6Ov1IJ+5W9cFhmc+51Ui0zGpY3zgKpeD0XxmF90hebOzkRjzq8PvdftzzGXIo83MBE /wrqRbzNzEImx8IJSh0LGZOeMqEZE2Yds7QXSa9a06iJnj1Q77zS9DqUFRLIaJkBecoxYvW5hhr lm3KZjTRtMy9Ce3MK+NzqFvaucXljFrp5qkNXtCtuuMets3+/fOvYsVzYZeFnijs+gi0Q+Cc0Fb B8s3ZfLKalhNMsxI0pB5sgEX/LykkkmZVQyW9QPid4+YT3A6XFqnplunzYdQw3t3NEqlaT0h X-Proofpoint-GUID: hOsfHQ4vBzH7XOGCFVvDz8GO6QoiM7Xf X-Authority-Analysis: v=2.4 cv=WMV/XmsR c=1 sm=1 tr=0 ts=6823180b cx=c_pps a=JYp8KDb2vCoCEuGobkYCKw==:117 a=JYp8KDb2vCoCEuGobkYCKw==:17 a=GEpy-HfZoHoA:10 a=IkcTkHD0fZMA:10 a=dt9VzEwgFbYA:10 a=COk6AnOGAAAA:8 a=r3sHWlaIaOML1aV7Y10A:9 a=QEXdDO2ut3YA:10 a=TjNXssC_j7lpFel5tvFf:22 X-Proofpoint-ORIG-GUID: hOsfHQ4vBzH7XOGCFVvDz8GO6QoiM7Xf X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.0.736,FMLib:17.12.80.40 definitions=2025-05-12_07,2025-05-09_01,2025-02-21_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 adultscore=0 lowpriorityscore=0 priorityscore=1501 malwarescore=0 suspectscore=0 phishscore=0 impostorscore=0 bulkscore=0 spamscore=0 mlxscore=0 mlxlogscore=999 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2504070000 definitions=main-2505130094 Configure the selected queues to map with an Ethernet DMA ring for the packet to receive on ARM cores. As default initialization, all queues assigned to CPU port 0 are mapped to the EDMA ring 0. This configuration is later updated during Ethernet DMA initialization. Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 47 +++++++++++++++++++++++++- drivers/net/ethernet/qualcomm/ppe/ppe_config.h | 6 ++++ drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 5 +++ 3 files changed, 57 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c index 3b290eda7633..29d0af091854 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c @@ -1353,6 +1353,28 @@ int ppe_rss_hash_config_set(struct ppe_device *ppe_dev, int mode, return 0; } +/** + * ppe_ring_queue_map_set - Set the PPE queue to Ethernet DMA ring mapping + * @ppe_dev: PPE device + * @ring_id: Ethernet DMA ring ID + * @queue_map: Bit map of queue IDs to given Ethernet DMA ring + * + * Configure the mapping from a set of PPE queues to a given Ethernet DMA ring. + * + * Return: 0 on success, negative error code on failure. 
+ */ +int ppe_ring_queue_map_set(struct ppe_device *ppe_dev, int ring_id, u32 *queue_map) +{ + u32 reg, queue_bitmap_val[PPE_RING_TO_QUEUE_BITMAP_WORD_CNT]; + + memcpy(queue_bitmap_val, queue_map, sizeof(queue_bitmap_val)); + reg = PPE_RING_Q_MAP_TBL_ADDR + PPE_RING_Q_MAP_TBL_INC * ring_id; + + return regmap_bulk_write(ppe_dev->regmap, reg, + queue_bitmap_val, + ARRAY_SIZE(queue_bitmap_val)); +} + static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id, const struct ppe_bm_port_config port_cfg) { @@ -1874,6 +1896,25 @@ static int ppe_rss_hash_init(struct ppe_device *ppe_dev) return ppe_rss_hash_config_set(ppe_dev, PPE_RSS_HASH_MODE_IPV6, hash_cfg); } +/* Initialize mapping between PPE queues assigned to CPU port 0 + * to Ethernet DMA ring 0. + */ +static int ppe_queues_to_ring_init(struct ppe_device *ppe_dev) +{ + u32 queue_bmap[PPE_RING_TO_QUEUE_BITMAP_WORD_CNT] = {}; + int ret, queue_id, queue_max; + + ret = ppe_port_resource_get(ppe_dev, 0, PPE_RES_UCAST, + &queue_id, &queue_max); + if (ret) + return ret; + + for (; queue_id <= queue_max; queue_id++) + queue_bmap[queue_id / 32] |= BIT_MASK(queue_id % 32); + + return ppe_ring_queue_map_set(ppe_dev, 0, queue_bmap); +} + int ppe_hw_config(struct ppe_device *ppe_dev) { int ret; @@ -1902,5 +1943,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev) if (ret) return ret; - return ppe_rss_hash_init(ppe_dev); + ret = ppe_rss_hash_init(ppe_dev); + if (ret) + return ret; + + return ppe_queues_to_ring_init(ppe_dev); } diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h index fedcb9d9602f..6383f399df54 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h @@ -29,6 +29,9 @@ #define PPE_RSS_HASH_IP_LENGTH 4 #define PPE_RSS_HASH_TUPLES 5 +/* PPE supports 300 queues, each bit presents as one queue. */ +#define PPE_RING_TO_QUEUE_BITMAP_WORD_CNT 10 + /** * enum ppe_scheduler_frame_mode - PPE scheduler frame mode. * @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG, @@ -308,4 +311,7 @@ int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc, int ppe_counter_enable_set(struct ppe_device *ppe_dev, int port); int ppe_rss_hash_config_set(struct ppe_device *ppe_dev, int mode, struct ppe_rss_hash_cfg hash_cfg); +int ppe_ring_queue_map_set(struct ppe_device *ppe_dev, + int ring_id, + u32 *queue_map); #endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index ef1602674ec4..8a89d9aa82ae 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -207,6 +207,11 @@ #define PPE_L0_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0) #define PPE_L0_COMP_CFG_TBL_NODE_METER_LEN GENMASK(3, 2) +/* PPE queue to Ethernet DMA ring mapping table. */ +#define PPE_RING_Q_MAP_TBL_ADDR 0x42a000 +#define PPE_RING_Q_MAP_TBL_ENTRIES 24 +#define PPE_RING_Q_MAP_TBL_INC 0x40 + /* Table addresses for per-queue dequeue setting. 
 */
 #define PPE_DEQ_OPR_TBL_ADDR 0x430000
 #define PPE_DEQ_OPR_TBL_ENTRIES 300

From patchwork Tue May 13 09:58:33 2025
X-Patchwork-Submitter: Luo Jie
X-Patchwork-Id: 889664
From: Luo Jie
Date: Tue, 13 May 2025 17:58:33 +0800
Subject: [PATCH net-next v4 13/14] net: ethernet: qualcomm: Add PPE debugfs support for PPE counters
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-13-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lei Wei, Suruchi Agarwal,
 Pavithra R, Simon Horman, Jonathan Corbet, Kees Cook, "Gustavo A. R. Silva",
 Philipp Zabel
CC: Luo Jie

The PPE hardware counters track the packets handled by the various
functional blocks of the PPE. They help in tracing packets through the
PPE and in debugging packet drops.

The counters exposed by these debugfs files are common to all Ethernet
ports and do not include the counters that are specific to a MAC port,
so they cannot be displayed with ethtool. The per-MAC counters will be
supported using "ethtool -S" along with the netdevice driver.

The various types of PPE hardware counters are made available through
debugfs files under the directory "/sys/kernel/debug/ppe/".
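For reference, a short sketch of how the 5-word counter entries read by
this patch are unpacked: the packet count sits in word 0 and the drop
count is split as 24 low bits in word 2 plus 8 high bits in word 3. This
simply restates the PPE_GET_* helpers and ppe_pkt_cnt_get() from the diff
(and assumes <linux/bitfield.h> for FIELD_GET/FIELD_PREP).

/* Sketch: unpack a 5-word PPE counter entry (see ppe_pkt_cnt_get()). */
static void ppe_example_unpack_cnt(const u32 *tbl, u32 *pkt_cnt, u32 *drop_cnt)
{
	/* Word 0 holds the 32-bit packet count. */
	*pkt_cnt = FIELD_GET(GENMASK(31, 0), tbl[0]);

	/* Drop count: low 24 bits from word 2, high 8 bits from word 3. */
	*drop_cnt = FIELD_PREP(GENMASK(23, 0),
			       FIELD_GET(GENMASK(31, 8), tbl[2])) |
		    FIELD_PREP(GENMASK(31, 24),
			       FIELD_GET(GENMASK(7, 0), tbl[3]));
}

Once the driver is loaded, the counters can be inspected by reading the
corresponding files, for example /sys/kernel/debug/ppe/port_rx for the
per-port RX counters.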
Signed-off-by: Luo Jie --- drivers/net/ethernet/qualcomm/ppe/Makefile | 2 +- drivers/net/ethernet/qualcomm/ppe/ppe.c | 11 + drivers/net/ethernet/qualcomm/ppe/ppe.h | 3 + drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c | 814 ++++++++++++++++++++++++ drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h | 16 + drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 99 +++ 6 files changed, 944 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile index 410a7bb54cfe..9e60b2400c16 100644 --- a/drivers/net/ethernet/qualcomm/ppe/Makefile +++ b/drivers/net/ethernet/qualcomm/ppe/Makefile @@ -4,4 +4,4 @@ # obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o -qcom-ppe-objs := ppe.o ppe_config.o +qcom-ppe-objs := ppe.o ppe_config.o ppe_debugfs.o diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c index 253de6a15466..17f6770c59ae 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe.c +++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c @@ -16,6 +16,7 @@ #include "ppe.h" #include "ppe_config.h" +#include "ppe_debugfs.h" #define PPE_PORT_MAX 8 #define PPE_CLK_RATE 353000000 @@ -199,11 +200,20 @@ static int qcom_ppe_probe(struct platform_device *pdev) if (ret) return dev_err_probe(dev, ret, "PPE HW config failed\n"); + ppe_debugfs_setup(ppe_dev); platform_set_drvdata(pdev, ppe_dev); return 0; } +static void qcom_ppe_remove(struct platform_device *pdev) +{ + struct ppe_device *ppe_dev; + + ppe_dev = platform_get_drvdata(pdev); + ppe_debugfs_teardown(ppe_dev); +} + static const struct of_device_id qcom_ppe_of_match[] = { { .compatible = "qcom,ipq9574-ppe" }, {} @@ -216,6 +226,7 @@ static struct platform_driver qcom_ppe_driver = { .of_match_table = qcom_ppe_of_match, }, .probe = qcom_ppe_probe, + .remove = qcom_ppe_remove, }; module_platform_driver(qcom_ppe_driver); diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h index cc6767b7c2b8..e9a208b77459 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h @@ -11,6 +11,7 @@ struct device; struct regmap; +struct dentry; /** * struct ppe_device - PPE device private data. @@ -18,6 +19,7 @@ struct regmap; * @regmap: PPE register map. * @clk_rate: PPE clock rate. * @num_ports: Number of PPE ports. + * @debugfs_root: Debugfs root entry. * @num_icc_paths: Number of interconnect paths. * @icc_paths: Interconnect path array. * @@ -30,6 +32,7 @@ struct ppe_device { struct regmap *regmap; unsigned long clk_rate; unsigned int num_ports; + struct dentry *debugfs_root; unsigned int num_icc_paths; struct icc_bulk_data icc_paths[] __counted_by(num_icc_paths); }; diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c new file mode 100644 index 000000000000..2d79fe5e0275 --- /dev/null +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c @@ -0,0 +1,814 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +/* PPE debugfs routines for display of PPE counters useful for debug. 
*/ + +#include +#include +#include +#include +#include +#include + +#include "ppe.h" +#include "ppe_config.h" +#include "ppe_debugfs.h" +#include "ppe_regs.h" + +#define PPE_PKT_CNT_TBL_SIZE 3 +#define PPE_DROP_PKT_CNT_TBL_SIZE 5 + +#define PPE_W0_PKT_CNT GENMASK(31, 0) +#define PPE_W2_DROP_PKT_CNT_LOW GENMASK(31, 8) +#define PPE_W3_DROP_PKT_CNT_HIGH GENMASK(7, 0) + +#define PPE_GET_PKT_CNT(tbl_cnt) \ + FIELD_GET(PPE_W0_PKT_CNT, *(tbl_cnt)) +#define PPE_GET_DROP_PKT_CNT_LOW(tbl_cnt) \ + FIELD_GET(PPE_W2_DROP_PKT_CNT_LOW, *((tbl_cnt) + 0x2)) +#define PPE_GET_DROP_PKT_CNT_HIGH(tbl_cnt) \ + FIELD_GET(PPE_W3_DROP_PKT_CNT_HIGH, *((tbl_cnt) + 0x3)) + +/** + * enum ppe_cnt_size_type - PPE counter size type + * @PPE_PKT_CNT_SIZE_1WORD: Counter size with single register + * @PPE_PKT_CNT_SIZE_3WORD: Counter size with table of 3 words + * @PPE_PKT_CNT_SIZE_5WORD: Counter size with table of 5 words + * + * PPE takes the different register size to record the packet counters. + * It uses single register, or register table with 3 words or 5 words. + * The counter with table size 5 words also records the drop counter. + * There are also some other counter types occupying sizes less than 32 + * bits, which is not covered by this enumeration type. + */ +enum ppe_cnt_size_type { + PPE_PKT_CNT_SIZE_1WORD, + PPE_PKT_CNT_SIZE_3WORD, + PPE_PKT_CNT_SIZE_5WORD, +}; + +/** + * enum ppe_cnt_type - PPE counter type. + * @PPE_CNT_BM: Packet counter processed by BM. + * @PPE_CNT_PARSE: Packet counter parsed on ingress. + * @PPE_CNT_PORT_RX: Packet counter on the ingress port. + * @PPE_CNT_VLAN_RX: VLAN packet counter received. + * @PPE_CNT_L2_FWD: Packet counter processed by L2 forwarding. + * @PPE_CNT_CPU_CODE: Packet counter marked with various CPU codes. + * @PPE_CNT_VLAN_TX: VLAN packet counter transmitted. + * @PPE_CNT_PORT_TX: Packet counter on the egress port. + * @PPE_CNT_QM: Packet counter processed by QM. + */ +enum ppe_cnt_type { + PPE_CNT_BM, + PPE_CNT_PARSE, + PPE_CNT_PORT_RX, + PPE_CNT_VLAN_RX, + PPE_CNT_L2_FWD, + PPE_CNT_CPU_CODE, + PPE_CNT_VLAN_TX, + PPE_CNT_PORT_TX, + PPE_CNT_QM, +}; + +/** + * struct ppe_debugfs_entry - PPE debugfs entry. + * @name: Debugfs file name. + * @counter_type: PPE packet counter type. + * @ppe: PPE device. + * + * The PPE debugfs entry is used to create the debugfs file and passed + * to debugfs_create_file() as private data. 
+ */ +struct ppe_debugfs_entry { + const char *name; + enum ppe_cnt_type counter_type; + struct ppe_device *ppe; +}; + +static const struct ppe_debugfs_entry debugfs_files[] = { + { + .name = "bm", + .counter_type = PPE_CNT_BM, + }, + { + .name = "parse", + .counter_type = PPE_CNT_PARSE, + }, + { + .name = "port_rx", + .counter_type = PPE_CNT_PORT_RX, + }, + { + .name = "vlan_rx", + .counter_type = PPE_CNT_VLAN_RX, + }, + { + .name = "l2_forward", + .counter_type = PPE_CNT_L2_FWD, + }, + { + .name = "cpu_code", + .counter_type = PPE_CNT_CPU_CODE, + }, + { + .name = "vlan_tx", + .counter_type = PPE_CNT_VLAN_TX, + }, + { + .name = "port_tx", + .counter_type = PPE_CNT_PORT_TX, + }, + { + .name = "qm", + .counter_type = PPE_CNT_QM, + }, +}; + +static int ppe_pkt_cnt_get(struct ppe_device *ppe_dev, u32 reg, + enum ppe_cnt_size_type cnt_type, + u32 *cnt, u32 *drop_cnt) +{ + u32 drop_pkt_cnt[PPE_DROP_PKT_CNT_TBL_SIZE]; + u32 pkt_cnt[PPE_PKT_CNT_TBL_SIZE]; + u32 value; + int ret; + + switch (cnt_type) { + case PPE_PKT_CNT_SIZE_1WORD: + ret = regmap_read(ppe_dev->regmap, reg, &value); + if (ret) + return ret; + + *cnt = value; + break; + case PPE_PKT_CNT_SIZE_3WORD: + ret = regmap_bulk_read(ppe_dev->regmap, reg, + pkt_cnt, ARRAY_SIZE(pkt_cnt)); + if (ret) + return ret; + + *cnt = PPE_GET_PKT_CNT(pkt_cnt); + break; + case PPE_PKT_CNT_SIZE_5WORD: + ret = regmap_bulk_read(ppe_dev->regmap, reg, + drop_pkt_cnt, ARRAY_SIZE(drop_pkt_cnt)); + if (ret) + return ret; + + *cnt = PPE_GET_PKT_CNT(drop_pkt_cnt); + + /* Drop counter with low 24 bits. */ + value = PPE_GET_DROP_PKT_CNT_LOW(drop_pkt_cnt); + *drop_cnt = FIELD_PREP(GENMASK(23, 0), value); + + /* Drop counter with high 8 bits. */ + value = PPE_GET_DROP_PKT_CNT_HIGH(drop_pkt_cnt); + *drop_cnt |= FIELD_PREP(GENMASK(31, 24), value); + break; + } + + return 0; +} + +static void ppe_tbl_pkt_cnt_clear(struct ppe_device *ppe_dev, u32 reg, + enum ppe_cnt_size_type cnt_type) +{ + u32 drop_pkt_cnt[PPE_DROP_PKT_CNT_TBL_SIZE] = {}; + u32 pkt_cnt[PPE_PKT_CNT_TBL_SIZE] = {}; + + switch (cnt_type) { + case PPE_PKT_CNT_SIZE_1WORD: + regmap_write(ppe_dev->regmap, reg, 0); + break; + case PPE_PKT_CNT_SIZE_3WORD: + regmap_bulk_write(ppe_dev->regmap, reg, + pkt_cnt, ARRAY_SIZE(pkt_cnt)); + break; + case PPE_PKT_CNT_SIZE_5WORD: + regmap_bulk_write(ppe_dev->regmap, reg, + drop_pkt_cnt, ARRAY_SIZE(drop_pkt_cnt)); + break; + } +} + +static int ppe_bm_counter_get(struct ppe_device *ppe_dev, struct seq_file *seq) +{ + u32 reg, val, pkt_cnt, pkt_cnt1; + int ret, i, tag; + + seq_printf(seq, "%-24s", "BM SILENT_DROP:"); + tag = 0; + for (i = 0; i < PPE_DROP_CNT_TBL_ENTRIES; i++) { + reg = PPE_DROP_CNT_TBL_ADDR + i * PPE_DROP_CNT_TBL_INC; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD, + &pkt_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + if (pkt_cnt > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); + + /* The number of packets dropped because hardware buffers were + * available only partially for the packet. 
+ */ + seq_printf(seq, "%-24s", "BM OVERFLOW_DROP:"); + tag = 0; + for (i = 0; i < PPE_DROP_STAT_TBL_ENTRIES; i++) { + reg = PPE_DROP_STAT_TBL_ADDR + PPE_DROP_STAT_TBL_INC * i; + + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + if (pkt_cnt > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "port", i); + } + } + + seq_putc(seq, '\n'); + + /* The number of currently occupied buffers, that can't be flushed. */ + seq_printf(seq, "%-24s", "BM USED/REACT:"); + tag = 0; + for (i = 0; i < PPE_BM_USED_CNT_TBL_ENTRIES; i++) { + reg = PPE_BM_USED_CNT_TBL_ADDR + i * PPE_BM_USED_CNT_TBL_INC; + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + /* The number of PPE buffers used for caching the received + * packets before the pause frame sent. + */ + pkt_cnt = FIELD_GET(PPE_BM_USED_CNT_VAL, val); + + reg = PPE_BM_REACT_CNT_TBL_ADDR + i * PPE_BM_REACT_CNT_TBL_INC; + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + /* The number of PPE buffers used for caching the received + * packets after pause frame sent out. + */ + pkt_cnt1 = FIELD_GET(PPE_BM_REACT_CNT_VAL, val); + + if (pkt_cnt > 0 || pkt_cnt1 > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, pkt_cnt1, + "port", i); + } + } + + seq_putc(seq, '\n'); + + return 0; +} + +/* The number of packets processed by the ingress parser module of PPE. */ +static int ppe_parse_pkt_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, cnt = 0, tunnel_cnt = 0; + int i, ret, tag = 0; + + seq_printf(seq, "%-24s", "PARSE TPRX/IPRX:"); + for (i = 0; i < PPE_IPR_PKT_CNT_TBL_ENTRIES; i++) { + reg = PPE_TPR_PKT_CNT_TBL_ADDR + i * PPE_TPR_PKT_CNT_TBL_INC; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD, + &tunnel_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + reg = PPE_IPR_PKT_CNT_TBL_ADDR + i * PPE_IPR_PKT_CNT_TBL_INC; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD, + &cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + if (tunnel_cnt > 0 || cnt > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u/%u(%s=%04d)", tunnel_cnt, cnt, + "port", i); + } + } + + seq_putc(seq, '\n'); + + return 0; +} + +/* The number of packets received or dropped on the ingress port. 
+ */
+static int ppe_port_rx_counter_get(struct ppe_device *ppe_dev,
+				   struct seq_file *seq)
+{
+	u32 reg, pkt_cnt = 0, drop_cnt = 0;
+	int ret, i, tag;
+
+	seq_printf(seq, "%-24s", "PORT RX/RX_DROP:");
+	tag = 0;
+	for (i = 0; i < PPE_PHY_PORT_RX_CNT_TBL_ENTRIES; i++) {
+		reg = PPE_PHY_PORT_RX_CNT_TBL_ADDR + PPE_PHY_PORT_RX_CNT_TBL_INC * i;
+		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD,
+				      &pkt_cnt, &drop_cnt);
+		if (ret) {
+			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
+			return ret;
+		}
+
+		if (pkt_cnt > 0) {
+			if (!((++tag) % 4))
+				seq_printf(seq, "\n%-24s", "");
+
+			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt,
+				   "port", i);
+		}
+	}
+
+	seq_putc(seq, '\n');
+
+	seq_printf(seq, "%-24s", "VPORT RX/RX_DROP:");
+	tag = 0;
+	for (i = 0; i < PPE_PORT_RX_CNT_TBL_ENTRIES; i++) {
+		reg = PPE_PORT_RX_CNT_TBL_ADDR + PPE_PORT_RX_CNT_TBL_INC * i;
+		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD,
+				      &pkt_cnt, &drop_cnt);
+		if (ret) {
+			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
+			return ret;
+		}
+
+		if (pkt_cnt > 0) {
+			if (!((++tag) % 4))
+				seq_printf(seq, "\n%-24s", "");
+
+			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt,
+				   "port", i);
+		}
+	}
+
+	seq_putc(seq, '\n');
+
+	return 0;
+}
+
+/* The number of packets received or dropped by layer 2 processing. */
+static int ppe_l2_counter_get(struct ppe_device *ppe_dev,
+			      struct seq_file *seq)
+{
+	u32 reg, pkt_cnt = 0, drop_cnt = 0;
+	int ret, i, tag = 0;
+
+	seq_printf(seq, "%-24s", "L2 RX/RX_DROP:");
+	for (i = 0; i < PPE_PRE_L2_CNT_TBL_ENTRIES; i++) {
+		reg = PPE_PRE_L2_CNT_TBL_ADDR + PPE_PRE_L2_CNT_TBL_INC * i;
+		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD,
+				      &pkt_cnt, &drop_cnt);
+		if (ret) {
+			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
+			return ret;
+		}
+
+		if (pkt_cnt > 0) {
+			if (!((++tag) % 4))
+				seq_printf(seq, "\n%-24s", "");
+
+			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt,
+				   "vsi", i);
+		}
+	}
+
+	seq_putc(seq, '\n');
+
+	return 0;
+}
+
+/* The number of VLAN packets received by the PPE. */
+static int ppe_vlan_rx_counter_get(struct ppe_device *ppe_dev,
+				   struct seq_file *seq)
+{
+	u32 reg, pkt_cnt = 0;
+	int ret, i, tag = 0;
+
+	seq_printf(seq, "%-24s", "VLAN RX:");
+	for (i = 0; i < PPE_VLAN_CNT_TBL_ENTRIES; i++) {
+		reg = PPE_VLAN_CNT_TBL_ADDR + PPE_VLAN_CNT_TBL_INC * i;
+
+		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
+				      &pkt_cnt, NULL);
+		if (ret) {
+			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
+			return ret;
+		}
+
+		if (pkt_cnt > 0) {
+			if (!((++tag) % 4))
+				seq_printf(seq, "\n%-24s", "");
+
+			seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "vsi", i);
+		}
+	}
+
+	seq_putc(seq, '\n');
+
+	return 0;
+}
+
+/* The number of packets handed to the CPU by the PPE. */
+static int ppe_cpu_code_counter_get(struct ppe_device *ppe_dev,
+				    struct seq_file *seq)
+{
+	u32 reg, pkt_cnt = 0;
+	int ret, i;
+
+	seq_printf(seq, "%-24s", "CPU CODE:");
+	for (i = 0; i < PPE_DROP_CPU_CNT_TBL_ENTRIES; i++) {
+		reg = PPE_DROP_CPU_CNT_TBL_ADDR + PPE_DROP_CPU_CNT_TBL_INC * i;
+
+		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
+				      &pkt_cnt, NULL);
+		if (ret) {
+			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
+			return ret;
+		}
+
+		if (!pkt_cnt)
+			continue;
+
+		/* There are 256 CPU codes saved in the first 256 entries
+		 * of the register table, and 128 drop codes for each PPE
+		 * port (0-7), so the table has 256 + 8 * 128 entries in total.
+ */ + if (i < 256) + seq_printf(seq, "%10u(cpucode:%d)", pkt_cnt, i); + else + seq_printf(seq, "%10u(port=%d),dropcode:%d", pkt_cnt, + (i - 256) % 8, (i - 256) / 8); + seq_putc(seq, '\n'); + seq_printf(seq, "%-24s", ""); + } + + seq_putc(seq, '\n'); + + return 0; +} + +/* The number of packets forwarded by VLAN on the egress direction. */ +static int ppe_vlan_tx_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0; + int ret, i, tag = 0; + + seq_printf(seq, "%-24s", "VLAN TX:"); + for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_EG_VSI_COUNTER_TBL_ADDR + PPE_EG_VSI_COUNTER_TBL_INC * i; + + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + if (pkt_cnt > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "vsi", i); + } + } + + seq_putc(seq, '\n'); + + return 0; +} + +/* The number of packets transmitted or dropped on the egress port. */ +static int ppe_port_tx_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, pkt_cnt = 0, drop_cnt = 0; + int ret, i, tag; + + seq_printf(seq, "%-24s", "VPORT TX/TX_DROP:"); + tag = 0; + for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_VPORT_TX_COUNTER_TBL_ADDR + PPE_VPORT_TX_COUNTER_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + reg = PPE_VPORT_TX_DROP_CNT_TBL_ADDR + PPE_VPORT_TX_DROP_CNT_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &drop_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + if (pkt_cnt > 0 || drop_cnt > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt, + "port", i); + } + } + + seq_putc(seq, '\n'); + + seq_printf(seq, "%-24s", "PORT TX/TX_DROP:"); + tag = 0; + for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_PORT_TX_COUNTER_TBL_ADDR + PPE_PORT_TX_COUNTER_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + reg = PPE_PORT_TX_DROP_CNT_TBL_ADDR + PPE_PORT_TX_DROP_CNT_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &drop_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + if (pkt_cnt > 0 || drop_cnt > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt, + "port", i); + } + } + + seq_putc(seq, '\n'); + + return 0; +} + +/* The number of packets transmitted or pending by the PPE queue. 
*/ +static int ppe_queue_counter_get(struct ppe_device *ppe_dev, + struct seq_file *seq) +{ + u32 reg, val, pkt_cnt = 0, pend_cnt = 0; + int ret, i, tag = 0; + + seq_printf(seq, "%-24s", "QUEUE TX/PEND:"); + for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_QUEUE_TX_COUNTER_TBL_ADDR + PPE_QUEUE_TX_COUNTER_TBL_INC * i; + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, + &pkt_cnt, NULL); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + if (i < PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES) { + reg = PPE_AC_UNICAST_QUEUE_CNT_TBL_ADDR + + PPE_AC_UNICAST_QUEUE_CNT_TBL_INC * i; + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + pend_cnt = FIELD_GET(PPE_AC_UNICAST_QUEUE_CNT_TBL_PEND_CNT, val); + } else { + reg = PPE_AC_MULTICAST_QUEUE_CNT_TBL_ADDR + + PPE_AC_MULTICAST_QUEUE_CNT_TBL_INC * + (i - PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES); + ret = regmap_read(ppe_dev->regmap, reg, &val); + if (ret) { + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); + return ret; + } + + pend_cnt = FIELD_GET(PPE_AC_MULTICAST_QUEUE_CNT_TBL_PEND_CNT, val); + } + + if (pkt_cnt > 0 || pend_cnt > 0) { + if (!((++tag) % 4)) + seq_printf(seq, "\n%-24s", ""); + + seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, pend_cnt, "queue", i); + } + } + + seq_putc(seq, '\n'); + + return 0; +} + +/* Display the various packet counters of PPE. */ +static int ppe_packet_counter_show(struct seq_file *seq, void *v) +{ + struct ppe_debugfs_entry *entry = seq->private; + struct ppe_device *ppe_dev = entry->ppe; + int ret; + + switch (entry->counter_type) { + case PPE_CNT_BM: + ret = ppe_bm_counter_get(ppe_dev, seq); + break; + case PPE_CNT_PARSE: + ret = ppe_parse_pkt_counter_get(ppe_dev, seq); + break; + case PPE_CNT_PORT_RX: + ret = ppe_port_rx_counter_get(ppe_dev, seq); + break; + case PPE_CNT_VLAN_RX: + ret = ppe_vlan_rx_counter_get(ppe_dev, seq); + break; + case PPE_CNT_L2_FWD: + ret = ppe_l2_counter_get(ppe_dev, seq); + break; + case PPE_CNT_CPU_CODE: + ret = ppe_cpu_code_counter_get(ppe_dev, seq); + break; + case PPE_CNT_VLAN_TX: + ret = ppe_vlan_tx_counter_get(ppe_dev, seq); + break; + case PPE_CNT_PORT_TX: + ret = ppe_port_tx_counter_get(ppe_dev, seq); + break; + case PPE_CNT_QM: + ret = ppe_queue_counter_get(ppe_dev, seq); + break; + default: + ret = -EINVAL; + break; + } + + return ret; +} + +/* Flush the various packet counters of PPE. 
*/ +static ssize_t ppe_packet_counter_write(struct file *file, + const char __user *buf, + size_t count, loff_t *pos) +{ + struct ppe_debugfs_entry *entry = file_inode(file)->i_private; + struct ppe_device *ppe_dev = entry->ppe; + u32 reg; + int i; + + switch (entry->counter_type) { + case PPE_CNT_BM: + for (i = 0; i < PPE_DROP_CNT_TBL_ENTRIES; i++) { + reg = PPE_DROP_CNT_TBL_ADDR + i * PPE_DROP_CNT_TBL_INC; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); + } + + for (i = 0; i < PPE_DROP_STAT_TBL_ENTRIES; i++) { + reg = PPE_DROP_STAT_TBL_ADDR + PPE_DROP_STAT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + break; + case PPE_CNT_PARSE: + for (i = 0; i < PPE_IPR_PKT_CNT_TBL_ENTRIES; i++) { + reg = PPE_IPR_PKT_CNT_TBL_ADDR + i * PPE_IPR_PKT_CNT_TBL_INC; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); + + reg = PPE_TPR_PKT_CNT_TBL_ADDR + i * PPE_TPR_PKT_CNT_TBL_INC; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); + } + + break; + case PPE_CNT_PORT_RX: + for (i = 0; i < PPE_PORT_RX_CNT_TBL_ENTRIES; i++) { + reg = PPE_PORT_RX_CNT_TBL_ADDR + PPE_PORT_RX_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); + } + + for (i = 0; i < PPE_PHY_PORT_RX_CNT_TBL_ENTRIES; i++) { + reg = PPE_PHY_PORT_RX_CNT_TBL_ADDR + PPE_PHY_PORT_RX_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); + } + + break; + case PPE_CNT_VLAN_RX: + for (i = 0; i < PPE_VLAN_CNT_TBL_ENTRIES; i++) { + reg = PPE_VLAN_CNT_TBL_ADDR + PPE_VLAN_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + break; + case PPE_CNT_L2_FWD: + for (i = 0; i < PPE_PRE_L2_CNT_TBL_ENTRIES; i++) { + reg = PPE_PRE_L2_CNT_TBL_ADDR + PPE_PRE_L2_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); + } + + break; + case PPE_CNT_CPU_CODE: + for (i = 0; i < PPE_DROP_CPU_CNT_TBL_ENTRIES; i++) { + reg = PPE_DROP_CPU_CNT_TBL_ADDR + PPE_DROP_CPU_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + break; + case PPE_CNT_VLAN_TX: + for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_EG_VSI_COUNTER_TBL_ADDR + PPE_EG_VSI_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + break; + case PPE_CNT_PORT_TX: + for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_PORT_TX_DROP_CNT_TBL_ADDR + PPE_PORT_TX_DROP_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + + reg = PPE_PORT_TX_COUNTER_TBL_ADDR + PPE_PORT_TX_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_VPORT_TX_COUNTER_TBL_ADDR + PPE_VPORT_TX_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + + reg = PPE_VPORT_TX_DROP_CNT_TBL_ADDR + PPE_VPORT_TX_DROP_CNT_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + break; + case PPE_CNT_QM: + for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_ENTRIES; i++) { + reg = PPE_QUEUE_TX_COUNTER_TBL_ADDR + PPE_QUEUE_TX_COUNTER_TBL_INC * i; + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); + } + + break; + default: + break; + } + + return count; +} +DEFINE_SHOW_STORE_ATTRIBUTE(ppe_packet_counter); + +void ppe_debugfs_setup(struct ppe_device *ppe_dev) +{ + struct ppe_debugfs_entry *entry; + int i; + + ppe_dev->debugfs_root = debugfs_create_dir("ppe", NULL); + if (IS_ERR(ppe_dev->debugfs_root)) 
+ return; + + for (i = 0; i < ARRAY_SIZE(debugfs_files); i++) { + entry = devm_kzalloc(ppe_dev->dev, sizeof(*entry), GFP_KERNEL); + if (!entry) + return; + + entry->ppe = ppe_dev; + entry->counter_type = debugfs_files[i].counter_type; + + debugfs_create_file(debugfs_files[i].name, 0444, + ppe_dev->debugfs_root, entry, + &ppe_packet_counter_fops); + } +} + +void ppe_debugfs_teardown(struct ppe_device *ppe_dev) +{ + debugfs_remove_recursive(ppe_dev->debugfs_root); + ppe_dev->debugfs_root = NULL; +} diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h new file mode 100644 index 000000000000..ba0a5b3af583 --- /dev/null +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +/* PPE debugfs counters setup. */ + +#ifndef __PPE_DEBUGFS_H__ +#define __PPE_DEBUGFS_H__ + +#include "ppe.h" + +void ppe_debugfs_setup(struct ppe_device *ppe_dev); +void ppe_debugfs_teardown(struct ppe_device *ppe_dev); + +#endif diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h index e990a9409598..3b5c539d8059 100644 --- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h +++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h @@ -16,6 +16,36 @@ #define PPE_BM_SCH_CTRL_SCH_OFFSET GENMASK(14, 8) #define PPE_BM_SCH_CTRL_SCH_EN BIT(31) +/* PPE drop counters. */ +#define PPE_DROP_CNT_TBL_ADDR 0xb024 +#define PPE_DROP_CNT_TBL_ENTRIES 8 +#define PPE_DROP_CNT_TBL_INC 4 + +/* BM port drop counters. */ +#define PPE_DROP_STAT_TBL_ADDR 0xe000 +#define PPE_DROP_STAT_TBL_ENTRIES 30 +#define PPE_DROP_STAT_TBL_INC 0x10 + +/* Egress VLAN counters. */ +#define PPE_EG_VSI_COUNTER_TBL_ADDR 0x41000 +#define PPE_EG_VSI_COUNTER_TBL_ENTRIES 64 +#define PPE_EG_VSI_COUNTER_TBL_INC 0x10 + +/* Port TX counters. */ +#define PPE_PORT_TX_COUNTER_TBL_ADDR 0x45000 +#define PPE_PORT_TX_COUNTER_TBL_ENTRIES 8 +#define PPE_PORT_TX_COUNTER_TBL_INC 0x10 + +/* Virtual port TX counters. */ +#define PPE_VPORT_TX_COUNTER_TBL_ADDR 0x47000 +#define PPE_VPORT_TX_COUNTER_TBL_ENTRIES 256 +#define PPE_VPORT_TX_COUNTER_TBL_INC 0x10 + +/* Queue counters. */ +#define PPE_QUEUE_TX_COUNTER_TBL_ADDR 0x4a000 +#define PPE_QUEUE_TX_COUNTER_TBL_ENTRIES 300 +#define PPE_QUEUE_TX_COUNTER_TBL_INC 0x10 + /* RSS settings are to calculate the random RSS hash value generated during * packet receive to ARM cores. This hash is then used to generate the queue * offset used to determine the queue used to transmit the packet to ARM cores. @@ -213,6 +243,51 @@ #define PPE_L2_PORT_SET_DST_INFO(tbl_cfg, value) \ FIELD_MODIFY(PPE_L2_VP_PORT_W0_DST_INFO, tbl_cfg, value) +/* Port RX and RX drop counters. */ +#define PPE_PORT_RX_CNT_TBL_ADDR 0x150000 +#define PPE_PORT_RX_CNT_TBL_ENTRIES 256 +#define PPE_PORT_RX_CNT_TBL_INC 0x20 + +/* Physical port RX and RX drop counters. */ +#define PPE_PHY_PORT_RX_CNT_TBL_ADDR 0x156000 +#define PPE_PHY_PORT_RX_CNT_TBL_ENTRIES 8 +#define PPE_PHY_PORT_RX_CNT_TBL_INC 0x20 + +/* Counters for the packet to CPU port. */ +#define PPE_DROP_CPU_CNT_TBL_ADDR 0x160000 +#define PPE_DROP_CPU_CNT_TBL_ENTRIES 1280 +#define PPE_DROP_CPU_CNT_TBL_INC 0x10 + +/* VLAN counters. */ +#define PPE_VLAN_CNT_TBL_ADDR 0x178000 +#define PPE_VLAN_CNT_TBL_ENTRIES 64 +#define PPE_VLAN_CNT_TBL_INC 0x10 + +/* PPE L2 counters. 
+ */
+#define PPE_PRE_L2_CNT_TBL_ADDR			0x17c000
+#define PPE_PRE_L2_CNT_TBL_ENTRIES		64
+#define PPE_PRE_L2_CNT_TBL_INC			0x20
+
+/* Port TX drop counters. */
+#define PPE_PORT_TX_DROP_CNT_TBL_ADDR		0x17d000
+#define PPE_PORT_TX_DROP_CNT_TBL_ENTRIES	8
+#define PPE_PORT_TX_DROP_CNT_TBL_INC		0x10
+
+/* Virtual port TX drop counters. */
+#define PPE_VPORT_TX_DROP_CNT_TBL_ADDR		0x17e000
+#define PPE_VPORT_TX_DROP_CNT_TBL_ENTRIES	256
+#define PPE_VPORT_TX_DROP_CNT_TBL_INC		0x10
+
+/* Counters for tunnel packets received. */
+#define PPE_TPR_PKT_CNT_TBL_ADDR		0x1d0080
+#define PPE_TPR_PKT_CNT_TBL_ENTRIES		8
+#define PPE_TPR_PKT_CNT_TBL_INC			4
+
+/* Counters for all packets received. */
+#define PPE_IPR_PKT_CNT_TBL_ADDR		0x1e0080
+#define PPE_IPR_PKT_CNT_TBL_ENTRIES		8
+#define PPE_IPR_PKT_CNT_TBL_INC			4
+
 /* PPE service code configuration for the tunnel packet. */
 #define PPE_TL_SERVICE_TBL_ADDR			0x306000
 #define PPE_TL_SERVICE_TBL_ENTRIES		256
@@ -325,6 +400,18 @@
 #define PPE_BM_PORT_GROUP_ID_INC		0x4
 #define PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID	GENMASK(1, 0)
 
+/* Counters for PPE buffers used for caching packets. */
+#define PPE_BM_USED_CNT_TBL_ADDR		0x6001c0
+#define PPE_BM_USED_CNT_TBL_ENTRIES		15
+#define PPE_BM_USED_CNT_TBL_INC			0x4
+#define PPE_BM_USED_CNT_VAL			GENMASK(10, 0)
+
+/* Counters for PPE buffers used for packets received after the pause frame is sent. */
+#define PPE_BM_REACT_CNT_TBL_ADDR		0x600240
+#define PPE_BM_REACT_CNT_TBL_ENTRIES		15
+#define PPE_BM_REACT_CNT_TBL_INC		0x4
+#define PPE_BM_REACT_CNT_VAL			GENMASK(8, 0)
+
 #define PPE_BM_SHARED_GROUP_CFG_ADDR		0x600290
 #define PPE_BM_SHARED_GROUP_CFG_ENTRIES		4
 #define PPE_BM_SHARED_GROUP_CFG_INC		0x4
@@ -449,6 +536,18 @@
 #define PPE_AC_GRP_SET_BUF_LIMIT(tbl_cfg, value)	\
 	FIELD_MODIFY(PPE_AC_GRP_W1_BUF_LIMIT, (tbl_cfg) + 0x1, value)
 
+/* Counters for packets handled by unicast queues (0-255). */
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_ADDR	0x84e000
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_ENTRIES	256
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_INC	0x10
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_PEND_CNT	GENMASK(12, 0)
+
+/* Counters for packets handled by multicast queues (256-299). */
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_ADDR	0x852000
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_ENTRIES	44
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_INC	0x10
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_PEND_CNT	GENMASK(12, 0)
+
 /* Table addresses for per-queue enqueue setting. */
 #define PPE_ENQ_OPR_TBL_ADDR			0x85c000
 #define PPE_ENQ_OPR_TBL_ENTRIES			300
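For reference, every counter table added to ppe_regs.h follows the same ADDR/ENTRIES/INC convention: entry i of a table sits at ADDR + i * INC, and ENTRIES bounds the loop. A minimal sketch of that access pattern is shown below; the helper name ppe_cnt_tbl_dump() and the pr_debug() output are hypothetical and only illustrate how a 3-word table could be walked with ppe_pkt_cnt_get() from this patch.

	/* Illustration only (hypothetical helper): dump one 3-word counter
	 * table given its base address, entry count and per-entry stride.
	 */
	static int ppe_cnt_tbl_dump(struct ppe_device *ppe_dev, u32 addr,
				    u32 entries, u32 inc)
	{
		u32 cnt;
		int i, ret;

		for (i = 0; i < entries; i++) {
			ret = ppe_pkt_cnt_get(ppe_dev, addr + i * inc,
					      PPE_PKT_CNT_SIZE_3WORD, &cnt, NULL);
			if (ret)
				return ret;

			if (cnt)
				pr_debug("entry %d: %u packets\n", i, cnt);
		}

		return 0;
	}

	/* e.g. ppe_cnt_tbl_dump(ppe_dev, PPE_QUEUE_TX_COUNTER_TBL_ADDR,
	 *			 PPE_QUEUE_TX_COUNTER_TBL_ENTRIES,
	 *			 PPE_QUEUE_TX_COUNTER_TBL_INC);
	 */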