From patchwork Tue May 13 09:58:27 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Luo Jie
X-Patchwork-Id: 889667
From: Luo Jie
Date: Tue, 13 May 2025 17:58:27 +0800
Subject: [PATCH net-next v4 07/14] net: ethernet: qualcomm: Initialize PPE
 queue settings
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-7-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lei Wei,
 Suruchi Agarwal, Pavithra R, "Simon Horman", Jonathan Corbet, Kees Cook,
 "Gustavo A. R. Silva", "Philipp Zabel"
CC: Luo Jie
X-Mailer: b4 0.14.1

Configure unicast and multicast hardware queues for the PPE ports to
enable packet forwarding between the ports.

Each PPE port is assigned a range of queues. The queue ID selected for
a packet is decided by a queue base and a queue offset: the base is
configured according to the packet's destination, and the offset is
derived from the packet's internal priority and RSS hash value.
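
As an illustration (not part of the patch), the sketch below shows how
the destination queue ID is composed from the values programmed by this
patch. Treating the two offsets as additive is an assumption about the
hardware made here for clarity; the driver itself only fills in the
mapping tables.

	/* Hypothetical helper, for illustration only: combine the queue
	 * base selected by the destination (port, CPU code or service
	 * code) with the offsets looked up from the internal priority
	 * and RSS hash map tables.
	 */
	static inline int ppe_example_queue_id(int queue_base,
					       int pri_offset,
					       int hash_offset)
	{
		/* Assumed additive combination performed by hardware. */
		return queue_base + pri_offset + hash_offset;
	}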
Signed-off-by: Luo Jie
---
 drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 356 ++++++++++++++++++++++++-
 drivers/net/ethernet/qualcomm/ppe/ppe_config.h |  63 +++++
 drivers/net/ethernet/qualcomm/ppe/ppe_regs.h   |  21 ++
 3 files changed, 439 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
index fe2d44ab59cb..e7b3921e85ec 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
@@ -138,6 +138,34 @@ struct ppe_scheduler_port_config {
 	unsigned int drr_node_id;
 };
 
+/**
+ * struct ppe_port_schedule_resource - PPE port scheduler resource.
+ * @ucastq_start: Unicast queue start ID.
+ * @ucastq_end: Unicast queue end ID.
+ * @mcastq_start: Multicast queue start ID.
+ * @mcastq_end: Multicast queue end ID.
+ * @flow_id_start: Flow start ID.
+ * @flow_id_end: Flow end ID.
+ * @l0node_start: Scheduler node start ID for queue level.
+ * @l0node_end: Scheduler node end ID for queue level.
+ * @l1node_start: Scheduler node start ID for flow level.
+ * @l1node_end: Scheduler node end ID for flow level.
+ *
+ * PPE scheduler resource allocated among the PPE ports.
+ */
+struct ppe_port_schedule_resource {
+	unsigned int ucastq_start;
+	unsigned int ucastq_end;
+	unsigned int mcastq_start;
+	unsigned int mcastq_end;
+	unsigned int flow_id_start;
+	unsigned int flow_id_end;
+	unsigned int l0node_start;
+	unsigned int l0node_end;
+	unsigned int l1node_start;
+	unsigned int l1node_end;
+};
+
 /* There are total 2048 buffers available in PPE, out of which some
  * buffers are reserved for some specific purposes per PPE port. The
  * rest of the pool of 1550 buffers are assigned to the general 'group0'
@@ -701,6 +729,111 @@ static const struct ppe_scheduler_port_config ppe_port_sch_config[] = {
 	},
 };
 
+/* The scheduler resource is allocated to each PPE port. The resource
+ * includes the unicast and multicast queues, flow nodes and DRR nodes.
+ */
+static const struct ppe_port_schedule_resource ppe_scheduler_res[] = {
+	{	.ucastq_start	= 0,
+		.ucastq_end	= 63,
+		.mcastq_start	= 256,
+		.mcastq_end	= 271,
+		.flow_id_start	= 0,
+		.flow_id_end	= 0,
+		.l0node_start	= 0,
+		.l0node_end	= 7,
+		.l1node_start	= 0,
+		.l1node_end	= 0,
+	},
+	{	.ucastq_start	= 144,
+		.ucastq_end	= 159,
+		.mcastq_start	= 272,
+		.mcastq_end	= 275,
+		.flow_id_start	= 36,
+		.flow_id_end	= 39,
+		.l0node_start	= 48,
+		.l0node_end	= 63,
+		.l1node_start	= 8,
+		.l1node_end	= 11,
+	},
+	{	.ucastq_start	= 160,
+		.ucastq_end	= 175,
+		.mcastq_start	= 276,
+		.mcastq_end	= 279,
+		.flow_id_start	= 40,
+		.flow_id_end	= 43,
+		.l0node_start	= 64,
+		.l0node_end	= 79,
+		.l1node_start	= 12,
+		.l1node_end	= 15,
+	},
+	{	.ucastq_start	= 176,
+		.ucastq_end	= 191,
+		.mcastq_start	= 280,
+		.mcastq_end	= 283,
+		.flow_id_start	= 44,
+		.flow_id_end	= 47,
+		.l0node_start	= 80,
+		.l0node_end	= 95,
+		.l1node_start	= 16,
+		.l1node_end	= 19,
+	},
+	{	.ucastq_start	= 192,
+		.ucastq_end	= 207,
+		.mcastq_start	= 284,
+		.mcastq_end	= 287,
+		.flow_id_start	= 48,
+		.flow_id_end	= 51,
+		.l0node_start	= 96,
+		.l0node_end	= 111,
+		.l1node_start	= 20,
+		.l1node_end	= 23,
+	},
+	{	.ucastq_start	= 208,
+		.ucastq_end	= 223,
+		.mcastq_start	= 288,
+		.mcastq_end	= 291,
+		.flow_id_start	= 52,
+		.flow_id_end	= 55,
+		.l0node_start	= 112,
+		.l0node_end	= 127,
+		.l1node_start	= 24,
+		.l1node_end	= 27,
+	},
+	{	.ucastq_start	= 224,
+		.ucastq_end	= 239,
+		.mcastq_start	= 292,
+		.mcastq_end	= 295,
+		.flow_id_start	= 56,
+		.flow_id_end	= 59,
+		.l0node_start	= 128,
+		.l0node_end	= 143,
+		.l1node_start	= 28,
+		.l1node_end	= 31,
+	},
+	{	.ucastq_start	= 240,
+		.ucastq_end	= 255,
+		.mcastq_start	= 296,
+		.mcastq_end	= 299,
+		.flow_id_start	= 60,
+		.flow_id_end	= 63,
+		.l0node_start	= 144,
+		.l0node_end	= 159,
+		.l1node_start	= 32,
+		.l1node_end	= 35,
+	},
+	{	.ucastq_start	= 64,
+		.ucastq_end	= 143,
+		.mcastq_start	= 0,
+		.mcastq_end	= 0,
+		.flow_id_start	= 1,
+		.flow_id_end	= 35,
+		.l0node_start	= 8,
+		.l0node_end	= 47,
+		.l1node_start	= 1,
+		.l1node_end	= 7,
+	},
+};
+
 /* Set the PPE queue level scheduler configuration. */
 static int ppe_scheduler_l0_queue_map_set(struct ppe_device *ppe_dev,
 					  int node_id, int port,
@@ -832,6 +965,149 @@ int ppe_queue_scheduler_set(struct ppe_device *ppe_dev,
 			    port, scheduler_cfg);
 }
 
+/**
+ * ppe_queue_ucast_base_set - Set PPE unicast queue base ID and profile ID
+ * @ppe_dev: PPE device
+ * @queue_dst: PPE queue destination configuration
+ * @queue_base: PPE queue base ID
+ * @profile_id: Profile ID
+ *
+ * The PPE unicast queue base ID and profile ID are configured based on the
+ * destination port information, which can be the service code, the CPU code
+ * or the destination port.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev,
+			     struct ppe_queue_ucast_dest queue_dst,
+			     int queue_base, int profile_id)
+{
+	int index, profile_size;
+	u32 val, reg;
+
+	profile_size = queue_dst.src_profile << 8;
+	if (queue_dst.service_code_en)
+		index = PPE_QUEUE_BASE_SERVICE_CODE + profile_size +
+			queue_dst.service_code;
+	else if (queue_dst.cpu_code_en)
+		index = PPE_QUEUE_BASE_CPU_CODE + profile_size +
+			queue_dst.cpu_code;
+	else
+		index = profile_size + queue_dst.dest_port;
+
+	val = FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID, profile_id);
+	val |= FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID, queue_base);
+	reg = PPE_UCAST_QUEUE_MAP_TBL_ADDR + index * PPE_UCAST_QUEUE_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_queue_ucast_offset_pri_set - Set PPE unicast queue offset based on priority
+ * @ppe_dev: PPE device
+ * @profile_id: Profile ID
+ * @priority: PPE internal priority used to select the queue offset
+ * @queue_offset: Queue offset used for calculating the destination queue ID
+ *
+ * The PPE unicast queue offset is configured based on the PPE
+ * internal priority.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev,
+				   int profile_id,
+				   int priority,
+				   int queue_offset)
+{
+	u32 val, reg;
+	int index;
+
+	index = (profile_id << 4) + priority;
+	val = FIELD_PREP(PPE_UCAST_PRIORITY_MAP_TBL_CLASS, queue_offset);
+	reg = PPE_UCAST_PRIORITY_MAP_TBL_ADDR + index * PPE_UCAST_PRIORITY_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_queue_ucast_offset_hash_set - Set PPE unicast queue offset based on hash
+ * @ppe_dev: PPE device
+ * @profile_id: Profile ID
+ * @rss_hash: Packet hash value used to select the queue offset
+ * @queue_offset: Queue offset used for calculating the destination queue ID
+ *
+ * The PPE unicast queue offset is configured based on the RSS hash value.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev,
+				    int profile_id,
+				    int rss_hash,
+				    int queue_offset)
+{
+	u32 val, reg;
+	int index;
+
+	index = (profile_id << 8) + rss_hash;
+	val = FIELD_PREP(PPE_UCAST_HASH_MAP_TBL_HASH, queue_offset);
+	reg = PPE_UCAST_HASH_MAP_TBL_ADDR + index * PPE_UCAST_HASH_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_port_resource_get - Get PPE resource per port
+ * @ppe_dev: PPE device
+ * @port: PPE port
+ * @type: Resource type
+ * @res_start: Resource start ID returned
+ * @res_end: Resource end ID returned
+ *
+ * The PPE resource is assigned per PPE port and is acquired for the QoS
+ * scheduler.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_port_resource_get(struct ppe_device *ppe_dev, int port,
+			  enum ppe_resource_type type,
+			  int *res_start, int *res_end)
+{
+	struct ppe_port_schedule_resource res;
+
+	/* The reserved resource entry, indexed by the maximum port ID of
+	 * the PPE, is also allowed to be acquired.
+	 */
+	if (port > ppe_dev->num_ports)
+		return -EINVAL;
+
+	res = ppe_scheduler_res[port];
+	switch (type) {
+	case PPE_RES_UCAST:
+		*res_start = res.ucastq_start;
+		*res_end = res.ucastq_end;
+		break;
+	case PPE_RES_MCAST:
+		*res_start = res.mcastq_start;
+		*res_end = res.mcastq_end;
+		break;
+	case PPE_RES_FLOW_ID:
+		*res_start = res.flow_id_start;
+		*res_end = res.flow_id_end;
+		break;
+	case PPE_RES_L0_NODE:
+		*res_start = res.l0node_start;
+		*res_end = res.l0node_end;
+		break;
+	case PPE_RES_L1_NODE:
+		*res_start = res.l1node_start;
+		*res_end = res.l1node_end;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id,
 				   const struct ppe_bm_port_config port_cfg)
 {
@@ -1167,6 +1443,80 @@ static int ppe_config_scheduler(struct ppe_device *ppe_dev)
 	return ret;
 };
 
+/* Configure the PPE queue destination for each PPE port. */
+static int ppe_queue_dest_init(struct ppe_device *ppe_dev)
+{
+	int ret, port_id, index, q_base, q_offset, res_start, res_end, pri_max;
+	struct ppe_queue_ucast_dest queue_dst;
+
+	for (port_id = 0; port_id < ppe_dev->num_ports; port_id++) {
+		memset(&queue_dst, 0, sizeof(queue_dst));
+
+		ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_UCAST,
+					    &res_start, &res_end);
+		if (ret)
+			return ret;
+
+		q_base = res_start;
+		queue_dst.dest_port = port_id;
+
+		/* Configure the queue base ID, with the profile ID set to
+		 * the same value as the physical port ID.
+		 */
+		ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst,
+					       q_base, port_id);
+		if (ret)
+			return ret;
+
+		/* Get the queue priority range supported by this PPE port. */
+		ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_L0_NODE,
+					    &res_start, &res_end);
+		if (ret)
+			return ret;
+
+		pri_max = res_end - res_start;
+
+		/* Redirect ARP reply packets to the queue with the maximum
+		 * priority on the CPU port, so that ARP replies (CPU code
+		 * 101) are delivered to the highest priority queue of the
+		 * EDMA.
+		 */
+		if (port_id == 0) {
+			memset(&queue_dst, 0, sizeof(queue_dst));
+
+			queue_dst.cpu_code_en = true;
+			queue_dst.cpu_code = 101;
+			ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst,
+						       q_base + pri_max,
+						       0);
+			if (ret)
+				return ret;
+		}
+
+		/* Initialize the queue offset for each internal priority. */
+		for (index = 0; index < PPE_QUEUE_INTER_PRI_NUM; index++) {
+			q_offset = index > pri_max ? pri_max : index;
+
+			ret = ppe_queue_ucast_offset_pri_set(ppe_dev, port_id,
+							     index, q_offset);
+			if (ret)
+				return ret;
+		}
+
+		/* Initialize the RSS hash queue offsets to 0 to avoid random
+		 * hardware values, which would otherwise generate unexpected
+		 * destination queues.
+		 */
+		for (index = 0; index < PPE_QUEUE_HASH_NUM; index++) {
+			ret = ppe_queue_ucast_offset_hash_set(ppe_dev, port_id,
+							      index, 0);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
 int ppe_hw_config(struct ppe_device *ppe_dev)
 {
 	int ret;
@@ -1179,5 +1529,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev)
 	if (ret)
 		return ret;
 
-	return ppe_config_scheduler(ppe_dev);
+	ret = ppe_config_scheduler(ppe_dev);
+	if (ret)
+		return ret;
+
+	return ppe_queue_dest_init(ppe_dev);
 }

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
index f28cfe7e1548..6553da34effe 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
@@ -8,6 +8,16 @@
 
 #include "ppe.h"
 
+/* There are different table index ranges for configuring the queue base ID
+ * of the destination port, CPU code and service code.
+ */
+#define PPE_QUEUE_BASE_DEST_PORT		0
+#define PPE_QUEUE_BASE_CPU_CODE			1024
+#define PPE_QUEUE_BASE_SERVICE_CODE		2048
+
+#define PPE_QUEUE_INTER_PRI_NUM			16
+#define PPE_QUEUE_HASH_NUM			256
+
 /**
  * enum ppe_scheduler_frame_mode - PPE scheduler frame mode.
  * @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG,
@@ -42,8 +52,61 @@ struct ppe_scheduler_cfg {
 	enum ppe_scheduler_frame_mode frame_mode;
 };
 
+/**
+ * enum ppe_resource_type - PPE resource type.
+ * @PPE_RES_UCAST: Unicast queue resource.
+ * @PPE_RES_MCAST: Multicast queue resource.
+ * @PPE_RES_L0_NODE: Queue level (level 0) scheduler node resource.
+ * @PPE_RES_L1_NODE: Flow level (level 1) scheduler node resource.
+ * @PPE_RES_FLOW_ID: Flow based node resource.
+ */
+enum ppe_resource_type {
+	PPE_RES_UCAST,
+	PPE_RES_MCAST,
+	PPE_RES_L0_NODE,
+	PPE_RES_L1_NODE,
+	PPE_RES_FLOW_ID,
+};
+
+/**
+ * struct ppe_queue_ucast_dest - PPE unicast queue destination.
+ * @src_profile: Source profile.
+ * @service_code_en: Enable service code to map the queue base ID.
+ * @service_code: Service code.
+ * @cpu_code_en: Enable CPU code to map the queue base ID.
+ * @cpu_code: CPU code.
+ * @dest_port: Destination port.
+ *
+ * The PPE egress queue ID is decided by the service code if enabled,
+ * otherwise by the CPU code if enabled, or by the destination port if
+ * both the service code and the CPU code are disabled.
+ */
+struct ppe_queue_ucast_dest {
+	int src_profile;
+	bool service_code_en;
+	int service_code;
+	bool cpu_code_en;
+	int cpu_code;
+	int dest_port;
+};
+
 int ppe_hw_config(struct ppe_device *ppe_dev);
 int ppe_queue_scheduler_set(struct ppe_device *ppe_dev,
 			    int node_id, bool flow_level, int port,
 			    struct ppe_scheduler_cfg scheduler_cfg);
+int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev,
+			     struct ppe_queue_ucast_dest queue_dst,
+			     int queue_base,
+			     int profile_id);
+int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev,
+				   int profile_id,
+				   int priority,
+				   int queue_offset);
+int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev,
+				    int profile_id,
+				    int rss_hash,
+				    int queue_offset);
+int ppe_port_resource_get(struct ppe_device *ppe_dev, int port,
+			  enum ppe_resource_type type,
+			  int *res_start, int *res_end);
 #endif

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index a1982fbecee7..5996fd40eb0a 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -164,6 +164,27 @@
 #define PPE_BM_PORT_FC_SET_PRE_ALLOC(tbl_cfg, value)	\
 	FIELD_MODIFY(PPE_BM_PORT_FC_W1_PRE_ALLOC, (tbl_cfg) + 0x1, value)
 
+/* The queue base configurations based on destination port,
+ * service code or CPU code.
+ */
+#define PPE_UCAST_QUEUE_MAP_TBL_ADDR		0x810000
+#define PPE_UCAST_QUEUE_MAP_TBL_ENTRIES		3072
+#define PPE_UCAST_QUEUE_MAP_TBL_INC		0x10
+#define PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID	GENMASK(3, 0)
+#define PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID	GENMASK(11, 4)
+
+/* The queue offset configurations based on RSS hash value. */
+#define PPE_UCAST_HASH_MAP_TBL_ADDR		0x830000
+#define PPE_UCAST_HASH_MAP_TBL_ENTRIES		4096
+#define PPE_UCAST_HASH_MAP_TBL_INC		0x10
+#define PPE_UCAST_HASH_MAP_TBL_HASH		GENMASK(7, 0)
+
+/* The queue offset configurations based on PPE internal priority. */
+#define PPE_UCAST_PRIORITY_MAP_TBL_ADDR		0x842000
+#define PPE_UCAST_PRIORITY_MAP_TBL_ENTRIES	256
+#define PPE_UCAST_PRIORITY_MAP_TBL_INC		0x10
+#define PPE_UCAST_PRIORITY_MAP_TBL_CLASS	GENMASK(3, 0)
+
 /* PPE unicast queue (0-255) configurations.
  */
 #define PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR	0x848000
 #define PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES	256
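
---

For reference, below is a minimal usage sketch of the APIs added by this
patch (not part of the patch itself). example_port_queue_setup() is a
hypothetical caller, and the priority cap here is simplified compared
with ppe_queue_dest_init() above, which derives it from the port's
level 0 scheduler node range rather than from the unicast queue range.

	/* Hypothetical caller of the new queue configuration APIs. */
	static int example_port_queue_setup(struct ppe_device *ppe_dev,
					    int port_id)
	{
		struct ppe_queue_ucast_dest queue_dst = { .dest_port = port_id };
		int res_start, res_end, pri, ret;

		/* Acquire the unicast queue range assigned to this port. */
		ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_UCAST,
					    &res_start, &res_end);
		if (ret)
			return ret;

		/* Packets destined to this port start at the first queue
		 * of the range; the profile ID mirrors the port ID.
		 */
		ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst,
					       res_start, port_id);
		if (ret)
			return ret;

		/* Spread the internal priorities across the queue range,
		 * capping the offset at the size of the range.
		 */
		for (pri = 0; pri < PPE_QUEUE_INTER_PRI_NUM; pri++) {
			ret = ppe_queue_ucast_offset_pri_set(ppe_dev, port_id,
							     pri,
							     min(pri, res_end - res_start));
			if (ret)
				return ret;
		}

		return 0;
	}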