From patchwork Tue Sep 10 09:23:58 2024
X-Patchwork-Submitter: Roger Quadros
X-Patchwork-Id: 827265
From: Roger Quadros
Date: Tue, 10 Sep 2024 12:23:58 +0300
Subject: [PATCH net-next v4 1/6] net: ethernet: ti: am65-cpsw: Introduce multi queue Rx
Message-Id: <20240910-am65-cpsw-multi-rx-v4-1-077fa6403043@kernel.org>
References: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org>
In-Reply-To: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Siddharth Vadapalli, Julien Panis, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend
Cc: Simon Horman, Andrew Lunn, Joe Damato, srk@ti.com, vigneshr@ti.com,
 danishanwar@ti.com, Pekka Varis, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org, bpf@vger.kernel.org,
 Roger Quadros

am65-cpsw can support up to 8 queues at Rx. Use a macro,
AM65_CPSW_MAX_RX_QUEUES, to indicate that. As there is only one DMA
channel for RX traffic, the 8 queues come as 8 flows in that channel.
By default we start with 1 flow, as defined by the macro
AM65_CPSW_DEFAULT_RX_CHN_FLOWS.

The user can change the number of flows with ethtool, like so:
'ethtool -L ethx rx <N>'

All traffic will still arrive on flow 0. To get traffic on the other
flows, the Classifiers will need to be set up.

Signed-off-by: Roger Quadros
Reviewed-by: Simon Horman
---
Changelog:
v4:
- Use single macro AM65_CPSW_MAX_QUEUES for both TX and RX queues to
  simplify code
- reuse am65_cpsw_get/set_per_queue_coalesce for am65_cpsw_get/set_coalesce.
- Return -EINVAL if unsupported tx/rx_coalesce_usecs in
  am65_cpsw_set_coalesce.
- move am65_cpsw_nuss_remove_rx/tx_chns() to am65_cpsw_nuss_update_tx_rx_chns()
- don't set skip_fdq during k3_udma_glue_reset_rx_chn(). Fixes breakage
  during ifdown/up.
v3: - style fixes: reverse xmas tree and checkpatch.pl --max-line-length=80 - typo fix: Classifer -> Classifier - added Reviewed-by Simon Horman --- drivers/net/ethernet/ti/am65-cpsw-ethtool.c | 75 +++--- drivers/net/ethernet/ti/am65-cpsw-nuss.c | 385 ++++++++++++++++------------ drivers/net/ethernet/ti/am65-cpsw-nuss.h | 39 +-- 3 files changed, 269 insertions(+), 230 deletions(-) diff --git a/drivers/net/ethernet/ti/am65-cpsw-ethtool.c b/drivers/net/ethernet/ti/am65-cpsw-ethtool.c index 539d5ca82f52..9032444435e9 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-ethtool.c +++ b/drivers/net/ethernet/ti/am65-cpsw-ethtool.c @@ -427,9 +427,9 @@ static void am65_cpsw_get_channels(struct net_device *ndev, { struct am65_cpsw_common *common = am65_ndev_to_common(ndev); - ch->max_rx = AM65_CPSW_MAX_RX_QUEUES; - ch->max_tx = AM65_CPSW_MAX_TX_QUEUES; - ch->rx_count = AM65_CPSW_MAX_RX_QUEUES; + ch->max_rx = AM65_CPSW_MAX_QUEUES; + ch->max_tx = AM65_CPSW_MAX_QUEUES; + ch->rx_count = common->rx_ch_num_flows; ch->tx_count = common->tx_ch_num; } @@ -447,9 +447,8 @@ static int am65_cpsw_set_channels(struct net_device *ndev, if (common->usage_count) return -EBUSY; - am65_cpsw_nuss_remove_tx_chns(common); - - return am65_cpsw_nuss_update_tx_chns(common, chs->tx_count); + return am65_cpsw_nuss_update_tx_rx_chns(common, chs->tx_count, + chs->rx_count); } static void @@ -913,80 +912,64 @@ static void am65_cpsw_get_mm_stats(struct net_device *ndev, s->MACMergeHoldCount = readl(base + AM65_CPSW_STATN_IET_TX_HOLD); } -static int am65_cpsw_get_coalesce(struct net_device *ndev, struct ethtool_coalesce *coal, - struct kernel_ethtool_coalesce *kernel_coal, - struct netlink_ext_ack *extack) -{ - struct am65_cpsw_common *common = am65_ndev_to_common(ndev); - struct am65_cpsw_tx_chn *tx_chn; - - tx_chn = &common->tx_chns[0]; - - coal->rx_coalesce_usecs = common->rx_pace_timeout / 1000; - coal->tx_coalesce_usecs = tx_chn->tx_pace_timeout / 1000; - - return 0; -} - static int am65_cpsw_get_per_queue_coalesce(struct net_device *ndev, u32 queue, struct ethtool_coalesce *coal) { struct am65_cpsw_common *common = am65_ndev_to_common(ndev); + struct am65_cpsw_rx_flow *rx_flow; struct am65_cpsw_tx_chn *tx_chn; - if (queue >= AM65_CPSW_MAX_TX_QUEUES) + if (queue >= AM65_CPSW_MAX_QUEUES) return -EINVAL; tx_chn = &common->tx_chns[queue]; - coal->tx_coalesce_usecs = tx_chn->tx_pace_timeout / 1000; + rx_flow = &common->rx_chns.flows[queue]; + coal->rx_coalesce_usecs = rx_flow->rx_pace_timeout / 1000; + return 0; } -static int am65_cpsw_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *coal, +static int am65_cpsw_get_coalesce(struct net_device *ndev, struct ethtool_coalesce *coal, struct kernel_ethtool_coalesce *kernel_coal, struct netlink_ext_ack *extack) { - struct am65_cpsw_common *common = am65_ndev_to_common(ndev); - struct am65_cpsw_tx_chn *tx_chn; - - tx_chn = &common->tx_chns[0]; - - if (coal->rx_coalesce_usecs && coal->rx_coalesce_usecs < 20) - return -EINVAL; - - if (coal->tx_coalesce_usecs && coal->tx_coalesce_usecs < 20) - return -EINVAL; - - common->rx_pace_timeout = coal->rx_coalesce_usecs * 1000; - tx_chn->tx_pace_timeout = coal->tx_coalesce_usecs * 1000; - - return 0; + return am65_cpsw_get_per_queue_coalesce(ndev, 0, coal); } static int am65_cpsw_set_per_queue_coalesce(struct net_device *ndev, u32 queue, struct ethtool_coalesce *coal) { struct am65_cpsw_common *common = am65_ndev_to_common(ndev); + struct am65_cpsw_rx_flow *rx_flow; struct am65_cpsw_tx_chn *tx_chn; - if (queue >= AM65_CPSW_MAX_TX_QUEUES) + 
if (queue >= AM65_CPSW_MAX_QUEUES) return -EINVAL; tx_chn = &common->tx_chns[queue]; - - if (coal->tx_coalesce_usecs && coal->tx_coalesce_usecs < 20) { - dev_info(common->dev, "defaulting to min value of 20us for tx-usecs for tx-%u\n", - queue); - coal->tx_coalesce_usecs = 20; - } + if (coal->tx_coalesce_usecs && coal->tx_coalesce_usecs < 20) + return -EINVAL; tx_chn->tx_pace_timeout = coal->tx_coalesce_usecs * 1000; + rx_flow = &common->rx_chns.flows[queue]; + if (coal->rx_coalesce_usecs && coal->rx_coalesce_usecs < 20) + return -EINVAL; + + rx_flow->rx_pace_timeout = coal->rx_coalesce_usecs * 1000; + return 0; } +static int am65_cpsw_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *coal, + struct kernel_ethtool_coalesce *kernel_coal, + struct netlink_ext_ack *extack) +{ + return am65_cpsw_set_per_queue_coalesce(ndev, 0, coal); +} + const struct ethtool_ops am65_cpsw_ethtool_ops_slave = { .begin = am65_cpsw_ethtool_op_begin, .complete = am65_cpsw_ethtool_op_complete, diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c index b7e5d0fb5d19..76e62351b30b 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c @@ -138,7 +138,7 @@ AM65_CPSW_PN_TS_CTL_RX_ANX_F_EN) #define AM65_CPSW_ALE_AGEOUT_DEFAULT 30 -/* Number of TX/RX descriptors */ +/* Number of TX/RX descriptors per channel/flow */ #define AM65_CPSW_MAX_TX_DESC 500 #define AM65_CPSW_MAX_RX_DESC 500 @@ -150,6 +150,7 @@ NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR) #define AM65_CPSW_DEFAULT_TX_CHNS 8 +#define AM65_CPSW_DEFAULT_RX_CHN_FLOWS 1 /* CPPI streaming packet interface */ #define AM65_CPSW_CPPI_TX_FLOW_ID 0x3FFF @@ -331,7 +332,7 @@ static void am65_cpsw_nuss_ndo_host_tx_timeout(struct net_device *ndev, } static int am65_cpsw_nuss_rx_push(struct am65_cpsw_common *common, - struct page *page) + struct page *page, u32 flow_idx) { struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns; struct cppi5_host_desc_t *desc_rx; @@ -364,7 +365,8 @@ static int am65_cpsw_nuss_rx_push(struct am65_cpsw_common *common, swdata = cppi5_hdesc_get_swdata(desc_rx); *((void **)swdata) = page_address(page); - return k3_udma_glue_push_rx_chn(rx_chn->rx_chn, 0, desc_rx, desc_dma); + return k3_udma_glue_push_rx_chn(rx_chn->rx_chn, flow_idx, + desc_rx, desc_dma); } void am65_cpsw_nuss_set_p0_ptype(struct am65_cpsw_common *common) @@ -399,22 +401,27 @@ static void am65_cpsw_init_port_emac_ale(struct am65_cpsw_port *port); static void am65_cpsw_destroy_xdp_rxqs(struct am65_cpsw_common *common) { struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns; + struct am65_cpsw_rx_flow *flow; struct xdp_rxq_info *rxq; - int i; + int id, port; - for (i = 0; i < common->port_num; i++) { - if (!common->ports[i].ndev) - continue; + for (id = 0; id < common->rx_ch_num_flows; id++) { + flow = &rx_chn->flows[id]; - rxq = &common->ports[i].xdp_rxq; + for (port = 0; port < common->port_num; port++) { + if (!common->ports[port].ndev) + continue; - if (xdp_rxq_info_is_reg(rxq)) - xdp_rxq_info_unreg(rxq); - } + rxq = &common->ports[port].xdp_rxq[id]; + + if (xdp_rxq_info_is_reg(rxq)) + xdp_rxq_info_unreg(rxq); + } - if (rx_chn->page_pool) { - page_pool_destroy(rx_chn->page_pool); - rx_chn->page_pool = NULL; + if (flow->page_pool) { + page_pool_destroy(flow->page_pool); + flow->page_pool = NULL; + } } } @@ -428,31 +435,44 @@ static int am65_cpsw_create_xdp_rxqs(struct am65_cpsw_common *common) .nid = dev_to_node(common->dev), .dev = common->dev, .dma_dir = DMA_BIDIRECTIONAL, - .napi = 
&common->napi_rx, + /* .napi set dynamically */ }; + struct am65_cpsw_rx_flow *flow; struct xdp_rxq_info *rxq; struct page_pool *pool; - int i, ret; - - pool = page_pool_create(&pp_params); - if (IS_ERR(pool)) - return PTR_ERR(pool); + int id, port, ret; + + for (id = 0; id < common->rx_ch_num_flows; id++) { + flow = &rx_chn->flows[id]; + pp_params.napi = &flow->napi_rx; + pool = page_pool_create(&pp_params); + if (IS_ERR(pool)) { + ret = PTR_ERR(pool); + goto err; + } - rx_chn->page_pool = pool; + flow->page_pool = pool; - for (i = 0; i < common->port_num; i++) { - if (!common->ports[i].ndev) - continue; + /* using same page pool is allowed as no running rx handlers + * simultaneously for both ndevs + */ + for (port = 0; port < common->port_num; port++) { + if (!common->ports[port].ndev) + continue; - rxq = &common->ports[i].xdp_rxq; + rxq = &common->ports[port].xdp_rxq[id]; - ret = xdp_rxq_info_reg(rxq, common->ports[i].ndev, i, 0); - if (ret) - goto err; + ret = xdp_rxq_info_reg(rxq, common->ports[port].ndev, + id, flow->napi_rx.napi_id); + if (ret) + goto err; - ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool); - if (ret) - goto err; + ret = xdp_rxq_info_reg_mem_model(rxq, + MEM_TYPE_PAGE_POOL, + pool); + if (ret) + goto err; + } } return 0; @@ -497,25 +517,27 @@ static enum am65_cpsw_tx_buf_type am65_cpsw_nuss_buf_type(struct am65_cpsw_tx_ch desc_idx); } -static inline void am65_cpsw_put_page(struct am65_cpsw_rx_chn *rx_chn, +static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow, struct page *page, bool allow_direct, int desc_idx) { - page_pool_put_full_page(rx_chn->page_pool, page, allow_direct); - rx_chn->pages[desc_idx] = NULL; + page_pool_put_full_page(flow->page_pool, page, allow_direct); + flow->pages[desc_idx] = NULL; } static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma) { - struct am65_cpsw_rx_chn *rx_chn = data; + struct am65_cpsw_rx_flow *flow = data; struct cppi5_host_desc_t *desc_rx; + struct am65_cpsw_rx_chn *rx_chn; dma_addr_t buf_dma; u32 buf_dma_len; void *page_addr; void **swdata; int desc_idx; + rx_chn = &flow->common->rx_chns; desc_rx = k3_cppi_desc_pool_dma2virt(rx_chn->desc_pool, desc_dma); swdata = cppi5_hdesc_get_swdata(desc_rx); page_addr = *swdata; @@ -526,7 +548,7 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma) desc_idx = am65_cpsw_nuss_desc_idx(rx_chn->desc_pool, desc_rx, rx_chn->dsize_log2); - am65_cpsw_put_page(rx_chn, virt_to_page(page_addr), false, desc_idx); + am65_cpsw_put_page(flow, virt_to_page(page_addr), false, desc_idx); } static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn, @@ -602,7 +624,8 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) struct am65_cpsw_host *host_p = am65_common_get_host(common); struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns; struct am65_cpsw_tx_chn *tx_chn = common->tx_chns; - int port_idx, i, ret, tx; + int port_idx, i, ret, tx, flow_idx; + struct am65_cpsw_rx_flow *flow; u32 val, port_mask; struct page *page; @@ -670,27 +693,26 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) return ret; } - for (i = 0; i < rx_chn->descs_num; i++) { - page = page_pool_dev_alloc_pages(rx_chn->page_pool); - if (!page) { - ret = -ENOMEM; - if (i) + for (flow_idx = 0; flow_idx < common->rx_ch_num_flows; flow_idx++) { + flow = &rx_chn->flows[flow_idx]; + for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) { + page = page_pool_dev_alloc_pages(flow->page_pool); + if (!page) { + dev_err(common->dev, "cannot 
allocate page in flow %d\n", + flow_idx); + ret = -ENOMEM; goto fail_rx; + } + flow->pages[i] = page; - return ret; - } - rx_chn->pages[i] = page; - - ret = am65_cpsw_nuss_rx_push(common, page); - if (ret < 0) { - dev_err(common->dev, - "cannot submit page to channel rx: %d\n", - ret); - am65_cpsw_put_page(rx_chn, page, false, i); - if (i) + ret = am65_cpsw_nuss_rx_push(common, page, flow_idx); + if (ret < 0) { + dev_err(common->dev, + "cannot submit page to rx channel flow %d, error %d\n", + flow_idx, ret); + am65_cpsw_put_page(flow, page, false, i); goto fail_rx; - - return ret; + } } } @@ -700,6 +722,14 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) goto fail_rx; } + for (i = 0; i < common->rx_ch_num_flows ; i++) { + napi_enable(&rx_chn->flows[i].napi_rx); + if (rx_chn->flows[i].irq_disabled) { + rx_chn->flows[i].irq_disabled = false; + enable_irq(rx_chn->flows[i].irq); + } + } + for (tx = 0; tx < common->tx_ch_num; tx++) { ret = k3_udma_glue_enable_tx_chn(tx_chn[tx].tx_chn); if (ret) { @@ -711,12 +741,6 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) napi_enable(&tx_chn[tx].napi_tx); } - napi_enable(&common->napi_rx); - if (common->rx_irq_disabled) { - common->rx_irq_disabled = false; - enable_irq(rx_chn->irq); - } - dev_dbg(common->dev, "cpsw_nuss started\n"); return 0; @@ -727,11 +751,24 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common) tx--; } + for (flow_idx = 0; i < common->rx_ch_num_flows; flow_idx++) { + flow = &rx_chn->flows[flow_idx]; + if (!flow->irq_disabled) { + disable_irq(flow->irq); + flow->irq_disabled = true; + } + napi_disable(&flow->napi_rx); + } + k3_udma_glue_disable_rx_chn(rx_chn->rx_chn); fail_rx: - k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, 0, rx_chn, - am65_cpsw_nuss_rx_cleanup, 0); + for (i = 0; i < common->rx_ch_num_flows; i--) + k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, &rx_chn->flows[i], + am65_cpsw_nuss_rx_cleanup, 0); + + am65_cpsw_destroy_xdp_rxqs(common); + return ret; } @@ -780,12 +817,12 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common) dev_err(common->dev, "rx teardown timeout\n"); } - napi_disable(&common->napi_rx); - hrtimer_cancel(&common->rx_hrtimer); - - for (i = 0; i < AM65_CPSW_MAX_RX_FLOWS; i++) - k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, rx_chn, - am65_cpsw_nuss_rx_cleanup, !!i); + for (i = 0; i < common->rx_ch_num_flows; i++) { + napi_disable(&rx_chn->flows[i].napi_rx); + hrtimer_cancel(&rx_chn->flows[i].rx_hrtimer); + k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, &rx_chn->flows[i], + am65_cpsw_nuss_rx_cleanup, 0); + } k3_udma_glue_disable_rx_chn(rx_chn->rx_chn); @@ -794,10 +831,6 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common) writel(0, common->cpsw_base + AM65_CPSW_REG_CTL); writel(0, common->cpsw_base + AM65_CPSW_REG_STAT_PORT_EN); - for (i = 0; i < rx_chn->descs_num; i++) { - if (rx_chn->pages[i]) - am65_cpsw_put_page(rx_chn, rx_chn->pages[i], false, i); - } am65_cpsw_destroy_xdp_rxqs(common); dev_dbg(common->dev, "cpsw_nuss stopped\n"); @@ -868,7 +901,7 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev) goto runtime_put; } - ret = netif_set_real_num_rx_queues(ndev, AM65_CPSW_MAX_RX_QUEUES); + ret = netif_set_real_num_rx_queues(ndev, common->rx_ch_num_flows); if (ret) { dev_err(common->dev, "cannot set real number of rx queues\n"); goto runtime_put; @@ -992,12 +1025,12 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev, return ret; } -static int am65_cpsw_run_xdp(struct 
am65_cpsw_common *common, +static int am65_cpsw_run_xdp(struct am65_cpsw_rx_flow *flow, struct am65_cpsw_port *port, struct xdp_buff *xdp, int desc_idx, int cpu, int *len) { - struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns; + struct am65_cpsw_common *common = flow->common; struct am65_cpsw_ndev_priv *ndev_priv; struct net_device *ndev = port->ndev; struct am65_cpsw_ndev_stats *stats; @@ -1026,7 +1059,7 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common, ret = AM65_CPSW_XDP_PASS; goto out; case XDP_TX: - tx_chn = &common->tx_chns[cpu % AM65_CPSW_MAX_TX_QUEUES]; + tx_chn = &common->tx_chns[cpu % AM65_CPSW_MAX_QUEUES]; netif_txq = netdev_get_tx_queue(ndev, tx_chn->id); xdpf = xdp_convert_buff_to_frame(xdp); @@ -1068,7 +1101,8 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common, } page = virt_to_head_page(xdp->data); - am65_cpsw_put_page(rx_chn, page, true, desc_idx); + am65_cpsw_put_page(flow, page, true, desc_idx); + out: return ret; } @@ -1106,11 +1140,12 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info) } } -static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common, - u32 flow_idx, int cpu, int *xdp_state) +static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow, + int cpu, int *xdp_state) { - struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns; + struct am65_cpsw_rx_chn *rx_chn = &flow->common->rx_chns; u32 buf_dma_len, pkt_len, port_id = 0, csum_info; + struct am65_cpsw_common *common = flow->common; struct am65_cpsw_ndev_priv *ndev_priv; struct am65_cpsw_ndev_stats *stats; struct cppi5_host_desc_t *desc_rx; @@ -1120,6 +1155,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common, struct am65_cpsw_port *port; int headroom, desc_idx, ret; struct net_device *ndev; + u32 flow_idx = flow->id; struct sk_buff *skb; struct xdp_buff xdp; void *page_addr; @@ -1174,10 +1210,10 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common, } if (port->xdp_prog) { - xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq); + xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq[flow->id]); xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM, pkt_len, false); - *xdp_state = am65_cpsw_run_xdp(common, port, &xdp, desc_idx, + *xdp_state = am65_cpsw_run_xdp(flow, port, &xdp, desc_idx, cpu, &pkt_len); if (*xdp_state != AM65_CPSW_XDP_PASS) goto allocate; @@ -1195,7 +1231,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common, skb_mark_for_recycle(skb); skb->protocol = eth_type_trans(skb, ndev); am65_cpsw_nuss_rx_csum(skb, csum_info); - napi_gro_receive(&common->napi_rx, skb); + napi_gro_receive(&flow->napi_rx, skb); stats = this_cpu_ptr(ndev_priv->stats); @@ -1205,24 +1241,24 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common, u64_stats_update_end(&stats->syncp); allocate: - new_page = page_pool_dev_alloc_pages(rx_chn->page_pool); + new_page = page_pool_dev_alloc_pages(flow->page_pool); if (unlikely(!new_page)) { dev_err(dev, "page alloc failed\n"); return -ENOMEM; } - rx_chn->pages[desc_idx] = new_page; + flow->pages[desc_idx] = new_page; if (netif_dormant(ndev)) { - am65_cpsw_put_page(rx_chn, new_page, true, desc_idx); + am65_cpsw_put_page(flow, new_page, true, desc_idx); ndev->stats.rx_dropped++; return 0; } requeue: - ret = am65_cpsw_nuss_rx_push(common, new_page); + ret = am65_cpsw_nuss_rx_push(common, new_page, flow_idx); if (WARN_ON(ret < 0)) { - am65_cpsw_put_page(rx_chn, new_page, true, desc_idx); + am65_cpsw_put_page(flow, new_page, true, desc_idx); 
ndev->stats.rx_errors++; ndev->stats.rx_dropped++; } @@ -1232,38 +1268,32 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common, static enum hrtimer_restart am65_cpsw_nuss_rx_timer_callback(struct hrtimer *timer) { - struct am65_cpsw_common *common = - container_of(timer, struct am65_cpsw_common, rx_hrtimer); + struct am65_cpsw_rx_flow *flow = container_of(timer, + struct am65_cpsw_rx_flow, + rx_hrtimer); - enable_irq(common->rx_chns.irq); + enable_irq(flow->irq); return HRTIMER_NORESTART; } static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget) { - struct am65_cpsw_common *common = am65_cpsw_napi_to_common(napi_rx); - int flow = AM65_CPSW_MAX_RX_FLOWS; + struct am65_cpsw_rx_flow *flow = am65_cpsw_napi_to_rx_flow(napi_rx); + struct am65_cpsw_common *common = flow->common; int cpu = smp_processor_id(); int xdp_state_or = 0; int cur_budget, ret; int xdp_state; int num_rx = 0; - /* process every flow */ - while (flow--) { - cur_budget = budget - num_rx; - - while (cur_budget--) { - ret = am65_cpsw_nuss_rx_packets(common, flow, cpu, - &xdp_state); - xdp_state_or |= xdp_state; - if (ret) - break; - num_rx++; - } - - if (num_rx >= budget) + /* process only this flow */ + cur_budget = budget; + while (cur_budget--) { + ret = am65_cpsw_nuss_rx_packets(flow, cpu, &xdp_state); + xdp_state_or |= xdp_state; + if (ret) break; + num_rx++; } if (xdp_state_or & AM65_CPSW_XDP_REDIRECT) @@ -1272,14 +1302,14 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget) dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget); if (num_rx < budget && napi_complete_done(napi_rx, num_rx)) { - if (common->rx_irq_disabled) { - common->rx_irq_disabled = false; - if (unlikely(common->rx_pace_timeout)) { - hrtimer_start(&common->rx_hrtimer, - ns_to_ktime(common->rx_pace_timeout), + if (flow->irq_disabled) { + flow->irq_disabled = false; + if (unlikely(flow->rx_pace_timeout)) { + hrtimer_start(&flow->rx_hrtimer, + ns_to_ktime(flow->rx_pace_timeout), HRTIMER_MODE_REL_PINNED); } else { - enable_irq(common->rx_chns.irq); + enable_irq(flow->irq); } } } @@ -1527,11 +1557,11 @@ static int am65_cpsw_nuss_tx_poll(struct napi_struct *napi_tx, int budget) static irqreturn_t am65_cpsw_nuss_rx_irq(int irq, void *dev_id) { - struct am65_cpsw_common *common = dev_id; + struct am65_cpsw_rx_flow *flow = dev_id; - common->rx_irq_disabled = true; + flow->irq_disabled = true; disable_irq_nosync(irq); - napi_schedule(&common->napi_rx); + napi_schedule(&flow->napi_rx); return IRQ_HANDLED; } @@ -2176,7 +2206,7 @@ static void am65_cpsw_nuss_free_tx_chns(void *data) } } -void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common) +static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common) { struct device *dev = common->dev; int i; @@ -2191,15 +2221,9 @@ void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common) devm_free_irq(dev, tx_chn->irq, tx_chn); netif_napi_del(&tx_chn->napi_tx); - - if (!IS_ERR_OR_NULL(tx_chn->desc_pool)) - k3_cppi_desc_pool_destroy(tx_chn->desc_pool); - - if (!IS_ERR_OR_NULL(tx_chn->tx_chn)) - k3_udma_glue_release_tx_chn(tx_chn->tx_chn); - - memset(tx_chn, 0, sizeof(*tx_chn)); } + + am65_cpsw_nuss_free_tx_chns(common); } static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common) @@ -2331,19 +2355,22 @@ static void am65_cpsw_nuss_free_rx_chns(void *data) k3_udma_glue_release_rx_chn(rx_chn->rx_chn); } -static void am65_cpsw_nuss_remove_rx_chns(void *data) +static void am65_cpsw_nuss_remove_rx_chns(struct 
am65_cpsw_common *common) { - struct am65_cpsw_common *common = data; struct device *dev = common->dev; struct am65_cpsw_rx_chn *rx_chn; + struct am65_cpsw_rx_flow *flows; + int i; rx_chn = &common->rx_chns; + flows = rx_chn->flows; devm_remove_action(dev, am65_cpsw_nuss_free_rx_chns, common); - if (!(rx_chn->irq < 0)) - devm_free_irq(dev, rx_chn->irq, common); - - netif_napi_del(&common->napi_rx); + for (i = 0; i < common->rx_ch_num_flows; i++) { + if (!(flows[i].irq < 0)) + devm_free_irq(dev, flows[i].irq, &flows[i]); + netif_napi_del(&flows[i].napi_rx); + } am65_cpsw_nuss_free_rx_chns(common); @@ -2356,6 +2383,7 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common) struct k3_udma_glue_rx_channel_cfg rx_cfg = { 0 }; u32 max_desc_num = AM65_CPSW_MAX_RX_DESC; struct device *dev = common->dev; + struct am65_cpsw_rx_flow *flow; u32 hdesc_size, hdesc_size_out; u32 fdqring_id; int i, ret = 0; @@ -2364,12 +2392,21 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common) AM65_CPSW_NAV_SW_DATA_SIZE); rx_cfg.swdata_size = AM65_CPSW_NAV_SW_DATA_SIZE; - rx_cfg.flow_id_num = AM65_CPSW_MAX_RX_FLOWS; + rx_cfg.flow_id_num = common->rx_ch_num_flows; rx_cfg.flow_id_base = common->rx_flow_id_base; /* init all flows */ rx_chn->dev = dev; - rx_chn->descs_num = max_desc_num; + rx_chn->descs_num = max_desc_num * rx_cfg.flow_id_num; + + for (i = 0; i < common->rx_ch_num_flows; i++) { + flow = &rx_chn->flows[i]; + flow->page_pool = NULL; + flow->pages = devm_kcalloc(dev, AM65_CPSW_MAX_RX_DESC, + sizeof(*flow->pages), GFP_KERNEL); + if (!flow->pages) + return -ENOMEM; + } rx_chn->rx_chn = k3_udma_glue_request_rx_chn(dev, "rx", &rx_cfg); if (IS_ERR(rx_chn->rx_chn)) { @@ -2392,13 +2429,6 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common) rx_chn->dsize_log2 = __fls(hdesc_size_out); WARN_ON(hdesc_size_out != (1 << rx_chn->dsize_log2)); - rx_chn->page_pool = NULL; - - rx_chn->pages = devm_kcalloc(dev, rx_chn->descs_num, - sizeof(*rx_chn->pages), GFP_KERNEL); - if (!rx_chn->pages) - return -ENOMEM; - common->rx_flow_id_base = k3_udma_glue_rx_get_flow_id_base(rx_chn->rx_chn); dev_info(dev, "set new flow-id-base %u\n", common->rx_flow_id_base); @@ -2422,6 +2452,10 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common) K3_UDMA_GLUE_SRC_TAG_LO_USE_REMOTE_SRC_TAG, }; + flow = &rx_chn->flows[i]; + flow->id = i; + flow->common = common; + rx_flow_cfg.ring_rxfdq0_id = fdqring_id; rx_flow_cfg.rx_cfg.size = max_desc_num; rx_flow_cfg.rxfdq_cfg.size = max_desc_num; @@ -2438,28 +2472,32 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common) k3_udma_glue_rx_flow_get_fdq_id(rx_chn->rx_chn, i); - rx_chn->irq = k3_udma_glue_rx_get_irq(rx_chn->rx_chn, i); - - if (rx_chn->irq < 0) { + flow->irq = k3_udma_glue_rx_get_irq(rx_chn->rx_chn, i); + if (flow->irq <= 0) { dev_err(dev, "Failed to get rx dma irq %d\n", - rx_chn->irq); - ret = rx_chn->irq; + flow->irq); + ret = flow->irq; goto err; } - } - - netif_napi_add(common->dma_ndev, &common->napi_rx, - am65_cpsw_nuss_rx_poll); - hrtimer_init(&common->rx_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED); - common->rx_hrtimer.function = &am65_cpsw_nuss_rx_timer_callback; - ret = devm_request_irq(dev, rx_chn->irq, - am65_cpsw_nuss_rx_irq, - IRQF_TRIGGER_HIGH, dev_name(dev), common); - if (ret) { - dev_err(dev, "failure requesting rx irq %u, %d\n", - rx_chn->irq, ret); - goto err; + snprintf(flow->name, + sizeof(flow->name), "%s-rx%d", + dev_name(dev), i); + netif_napi_add(common->dma_ndev, 
&flow->napi_rx, + am65_cpsw_nuss_rx_poll); + hrtimer_init(&flow->rx_hrtimer, CLOCK_MONOTONIC, + HRTIMER_MODE_REL_PINNED); + flow->rx_hrtimer.function = &am65_cpsw_nuss_rx_timer_callback; + + ret = devm_request_irq(dev, flow->irq, + am65_cpsw_nuss_rx_irq, + IRQF_TRIGGER_HIGH, + flow->name, flow); + if (ret) { + dev_err(dev, "failure requesting rx %d irq %u, %d\n", + i, flow->irq, ret); + goto err; + } } err: @@ -2705,8 +2743,8 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx) /* alloc netdev */ port->ndev = devm_alloc_etherdev_mqs(common->dev, sizeof(struct am65_cpsw_ndev_priv), - AM65_CPSW_MAX_TX_QUEUES, - AM65_CPSW_MAX_RX_QUEUES); + AM65_CPSW_MAX_QUEUES, + AM65_CPSW_MAX_QUEUES); if (!port->ndev) { dev_err(dev, "error allocating slave net_device %u\n", port->port_id); @@ -3303,9 +3341,10 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common) k3_udma_glue_disable_tx_chn(tx_chan[i].tx_chn); } - for (i = 0; i < AM65_CPSW_MAX_RX_FLOWS; i++) - k3_udma_glue_reset_rx_chn(rx_chan->rx_chn, i, rx_chan, - am65_cpsw_nuss_rx_cleanup, !!i); + for (i = 0; i < common->rx_ch_num_flows; i++) + k3_udma_glue_reset_rx_chn(rx_chan->rx_chn, i, + &rx_chan->flows[i], + am65_cpsw_nuss_rx_cleanup, 0); k3_udma_glue_disable_rx_chn(rx_chan->rx_chn); @@ -3346,12 +3385,21 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common) return ret; } -int am65_cpsw_nuss_update_tx_chns(struct am65_cpsw_common *common, int num_tx) +int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common, + int num_tx, int num_rx) { int ret; + am65_cpsw_nuss_remove_tx_chns(common); + am65_cpsw_nuss_remove_rx_chns(common); + common->tx_ch_num = num_tx; + common->rx_ch_num_flows = num_rx; ret = am65_cpsw_nuss_init_tx_chns(common); + if (ret) + return ret; + + ret = am65_cpsw_nuss_init_rx_chns(common); return ret; } @@ -3481,6 +3529,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev) common->rx_flow_id_base = -1; init_completion(&common->tdown_complete); common->tx_ch_num = AM65_CPSW_DEFAULT_TX_CHNS; + common->rx_ch_num_flows = AM65_CPSW_DEFAULT_RX_CHN_FLOWS; common->pf_p0_rx_ptype_rrobin = false; common->default_vlan = 1; @@ -3672,8 +3721,10 @@ static int am65_cpsw_nuss_resume(struct device *dev) return ret; /* If RX IRQ was disabled before suspend, keep it disabled */ - if (common->rx_irq_disabled) - disable_irq(common->rx_chns.irq); + for (i = 0; i < common->rx_ch_num_flows; i++) { + if (common->rx_chns.flows[i].irq_disabled) + disable_irq(common->rx_chns.flows[i].irq); + } am65_cpts_resume(common->cpts); diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h index e2ce2be320bd..dc8d544230dc 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h @@ -21,9 +21,7 @@ struct am65_cpts; #define HOST_PORT_NUM 0 -#define AM65_CPSW_MAX_TX_QUEUES 8 -#define AM65_CPSW_MAX_RX_QUEUES 1 -#define AM65_CPSW_MAX_RX_FLOWS 1 +#define AM65_CPSW_MAX_QUEUES 8 /* both TX & RX */ #define AM65_CPSW_PORT_VLAN_REG_OFFSET 0x014 @@ -58,7 +56,7 @@ struct am65_cpsw_port { struct am65_cpsw_qos qos; struct devlink_port devlink_port; struct bpf_prog *xdp_prog; - struct xdp_rxq_info xdp_rxq; + struct xdp_rxq_info xdp_rxq[AM65_CPSW_MAX_QUEUES]; /* Only for suspend resume context */ u32 vid_context; }; @@ -94,16 +92,27 @@ struct am65_cpsw_tx_chn { u32 rate_mbps; }; +struct am65_cpsw_rx_flow { + u32 id; + struct napi_struct napi_rx; + struct am65_cpsw_common *common; + int irq; + bool irq_disabled; + 
struct hrtimer rx_hrtimer; + unsigned long rx_pace_timeout; + struct page_pool *page_pool; + struct page **pages; + char name[32]; +}; + struct am65_cpsw_rx_chn { struct device *dev; struct device *dma_dev; struct k3_cppi_desc_pool *desc_pool; struct k3_udma_glue_rx_channel *rx_chn; - struct page_pool *page_pool; - struct page **pages; u32 descs_num; unsigned char dsize_log2; - int irq; + struct am65_cpsw_rx_flow flows[AM65_CPSW_MAX_QUEUES]; }; #define AM65_CPSW_QUIRK_I2027_NO_TX_CSUM BIT(0) @@ -145,16 +154,12 @@ struct am65_cpsw_common { u32 tx_ch_rate_msk; u32 rx_flow_id_base; - struct am65_cpsw_tx_chn tx_chns[AM65_CPSW_MAX_TX_QUEUES]; + struct am65_cpsw_tx_chn tx_chns[AM65_CPSW_MAX_QUEUES]; struct completion tdown_complete; atomic_t tdown_cnt; + int rx_ch_num_flows; struct am65_cpsw_rx_chn rx_chns; - struct napi_struct napi_rx; - - bool rx_irq_disabled; - struct hrtimer rx_hrtimer; - unsigned long rx_pace_timeout; u32 nuss_ver; u32 cpsw_ver; @@ -203,8 +208,8 @@ struct am65_cpsw_ndev_priv { #define am65_common_get_host(common) (&(common)->host) #define am65_common_get_port(common, id) (&(common)->ports[(id) - 1]) -#define am65_cpsw_napi_to_common(pnapi) \ - container_of(pnapi, struct am65_cpsw_common, napi_rx) +#define am65_cpsw_napi_to_rx_flow(pnapi) \ + container_of(pnapi, struct am65_cpsw_rx_flow, napi_rx) #define am65_cpsw_napi_to_tx_chn(pnapi) \ container_of(pnapi, struct am65_cpsw_tx_chn, napi_tx) @@ -215,8 +220,8 @@ struct am65_cpsw_ndev_priv { extern const struct ethtool_ops am65_cpsw_ethtool_ops_slave; void am65_cpsw_nuss_set_p0_ptype(struct am65_cpsw_common *common); -void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common); -int am65_cpsw_nuss_update_tx_chns(struct am65_cpsw_common *common, int num_tx); +int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common, + int num_tx, int num_rx); bool am65_cpsw_port_dev_check(const struct net_device *dev); From patchwork Tue Sep 10 09:23:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roger Quadros X-Patchwork-Id: 827604 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 29A70188CB1; Tue, 10 Sep 2024 09:24:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960261; cv=none; b=GCXWq7wfPDrfp5wXI8ViYZKrhtLv/zoJRO2pH9H/rS3KQnyGUD2d0pUU3BQL0MkBrJ2YYD2OHEPZJ1PB6M4WJ6jCRTsKwe37JoS8DgUUeBAqbHzrxwsPAsXp0Wsr9Iszx6jp8i6pf1sGK8hgo8BbL6VHB341cFY1okSCjbuvhpg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960261; c=relaxed/simple; bh=PFNnLTD7CaYjH2DlEg+zMBJShzrODexUAt4TPrsptbQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=EM9mRL/Igm1CugVOnzw8fcJ0DXUCuAthn7nzE8VlyCJ47EsEoyerFmNpVpz4dtOA69hAsUCr9g5XXDLzTLt3rzfvzC+e6EplRQMCL0Cjt+pmmmzhIQTmw6qbXqW/4txnF3k3ip7h01GoTdQNAiAGJbt7NV5gyzb6AI6L6JL3ZEw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=hW3FKaZp; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="hW3FKaZp" Received: by smtp.kernel.org (Postfix) with 
ESMTPSA id 3F8EFC4CEC3; Tue, 10 Sep 2024 09:24:16 +0000 (UTC)
From: Roger Quadros
Date: Tue, 10 Sep 2024 12:23:59 +0300
Subject: [PATCH net-next v4 2/6] net: ethernet: ti: cpsw_ale: use regfields for ALE registers
Message-Id: <20240910-am65-cpsw-multi-rx-v4-2-077fa6403043@kernel.org>
References: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org>
In-Reply-To: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org>

Map the entire ALE registerspace using regmap. Add regfields for Major
and Minor Version fields.
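
For readers unfamiliar with the regmap_field API, a condensed,
self-contained sketch of the pattern this patch adopts follows. The
standalone helper ale_read_version(), the trimmed field table and the
local ALE_IDVER/enum definitions are illustrative only, not code taken
from the patch:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regmap.h>

#define ALE_IDVER       0x00                    /* version register offset */

enum { MINOR_VER, MAJOR_VER };

static const struct regmap_config ale_regmap_cfg = {
        .reg_bits       = 32,
        .val_bits       = 32,
        .reg_stride     = 4,
        .name           = "cpsw-ale",
};

static const struct reg_field ale_ver_fields[] = {
        [MINOR_VER] = REG_FIELD(ALE_IDVER, 0, 7),       /* bits 7:0 */
        [MAJOR_VER] = REG_FIELD(ALE_IDVER, 8, 15),      /* bits 15:8 */
};

/* Map the ALE register space with regmap and read the version fields */
static int ale_read_version(struct device *dev, void __iomem *regs)
{
        struct regmap_field *f_minor, *f_major;
        struct regmap *map;
        u32 major, minor;

        map = devm_regmap_init_mmio(dev, regs, &ale_regmap_cfg);
        if (IS_ERR(map))
                return PTR_ERR(map);

        f_minor = devm_regmap_field_alloc(dev, map, ale_ver_fields[MINOR_VER]);
        if (IS_ERR(f_minor))
                return PTR_ERR(f_minor);

        f_major = devm_regmap_field_alloc(dev, map, ale_ver_fields[MAJOR_VER]);
        if (IS_ERR(f_major))
                return PTR_ERR(f_major);

        /* regmap_field_read() masks and shifts the field for us */
        regmap_field_read(f_minor, &minor);
        regmap_field_read(f_major, &major);

        dev_info(dev, "ALE version %u.%u\n", major, minor);

        return 0;
}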
Signed-off-by: Roger Quadros Reviewed-by: Simon Horman --- Changelog: v4: - reverse Xmas tree declaration order fixes v3: - added Reviewed-by Simon Horman --- drivers/net/ethernet/ti/cpsw_ale.c | 84 ++++++++++++++++++++++++++++++-------- drivers/net/ethernet/ti/cpsw_ale.h | 17 ++++++-- 2 files changed, 79 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c index 64bf22cd860c..979f741a231d 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.c +++ b/drivers/net/ethernet/ti/cpsw_ale.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -76,7 +77,7 @@ enum { * @dev_id: ALE version/SoC id * @features: features supported by ALE * @tbl_entries: number of ALE entries - * @major_ver_mask: mask of ALE Major Version Value in ALE_IDVER reg. + * @reg_fields: pointer to array of register field configuration * @nu_switch_ale: NU Switch ALE * @vlan_entry_tbl: ALE vlan entry fields description tbl */ @@ -84,7 +85,7 @@ struct cpsw_ale_dev_id { const char *dev_id; u32 features; u32 tbl_entries; - u32 major_ver_mask; + const struct reg_field *reg_fields; bool nu_switch_ale; const struct ale_entry_fld *vlan_entry_tbl; }; @@ -1292,25 +1293,37 @@ void cpsw_ale_stop(struct cpsw_ale *ale) cpsw_ale_control_set(ale, 0, ALE_ENABLE, 0); } +static const struct reg_field ale_fields_cpsw[] = { + /* CPSW_ALE_IDVER_REG */ + [MINOR_VER] = REG_FIELD(ALE_IDVER, 0, 7), + [MAJOR_VER] = REG_FIELD(ALE_IDVER, 8, 15), +}; + +static const struct reg_field ale_fields_cpsw_nu[] = { + /* CPSW_ALE_IDVER_REG */ + [MINOR_VER] = REG_FIELD(ALE_IDVER, 0, 7), + [MAJOR_VER] = REG_FIELD(ALE_IDVER, 8, 10), +}; + static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = { { /* am3/4/5, dra7. dm814x, 66ak2hk-gbe */ .dev_id = "cpsw", .tbl_entries = 1024, - .major_ver_mask = 0xff, + .reg_fields = ale_fields_cpsw, .vlan_entry_tbl = vlan_entry_cpsw, }, { /* 66ak2h_xgbe */ .dev_id = "66ak2h-xgbe", .tbl_entries = 2048, - .major_ver_mask = 0xff, + .reg_fields = ale_fields_cpsw, .vlan_entry_tbl = vlan_entry_cpsw, }, { .dev_id = "66ak2el", .features = CPSW_ALE_F_STATUS_REG, - .major_ver_mask = 0x7, + .reg_fields = ale_fields_cpsw_nu, .nu_switch_ale = true, .vlan_entry_tbl = vlan_entry_nu, }, @@ -1318,7 +1331,7 @@ static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = { .dev_id = "66ak2g", .features = CPSW_ALE_F_STATUS_REG, .tbl_entries = 64, - .major_ver_mask = 0x7, + .reg_fields = ale_fields_cpsw_nu, .nu_switch_ale = true, .vlan_entry_tbl = vlan_entry_nu, }, @@ -1326,20 +1339,20 @@ static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = { .dev_id = "am65x-cpsw2g", .features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING, .tbl_entries = 64, - .major_ver_mask = 0x7, + .reg_fields = ale_fields_cpsw_nu, .nu_switch_ale = true, .vlan_entry_tbl = vlan_entry_nu, }, { .dev_id = "j721e-cpswxg", .features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING, - .major_ver_mask = 0x7, + .reg_fields = ale_fields_cpsw_nu, .vlan_entry_tbl = vlan_entry_k3_cpswxg, }, { .dev_id = "am64-cpswxg", .features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING, - .major_ver_mask = 0x7, + .reg_fields = ale_fields_cpsw_nu, .vlan_entry_tbl = vlan_entry_k3_cpswxg, .tbl_entries = 512, }, @@ -1361,41 +1374,76 @@ cpsw_ale_dev_id *cpsw_ale_match_id(const struct cpsw_ale_dev_id *id, return NULL; } +static const struct regmap_config ale_regmap_cfg = { + .reg_bits = 32, + .val_bits = 32, + .reg_stride = 4, + .name = "cpsw-ale", +}; + +static int cpsw_ale_regfield_init(struct cpsw_ale *ale) +{ + 
const struct reg_field *reg_fields = ale->params.reg_fields; + struct device *dev = ale->params.dev; + struct regmap *regmap = ale->regmap; + int i; + + for (i = 0; i < ALE_FIELDS_MAX; i++) { + ale->fields[i] = devm_regmap_field_alloc(dev, regmap, + reg_fields[i]); + if (IS_ERR(ale->fields[i])) { + dev_err(dev, "Unable to allocate regmap field %d\n", i); + return PTR_ERR(ale->fields[i]); + } + } + + return 0; +} + struct cpsw_ale *cpsw_ale_create(struct cpsw_ale_params *params) { const struct cpsw_ale_dev_id *ale_dev_id; + u32 ale_entries, rev_major, rev_minor; struct cpsw_ale *ale; - u32 rev, ale_entries; + int ret; ale_dev_id = cpsw_ale_match_id(cpsw_ale_id_match, params->dev_id); if (!ale_dev_id) return ERR_PTR(-EINVAL); params->ale_entries = ale_dev_id->tbl_entries; - params->major_ver_mask = ale_dev_id->major_ver_mask; params->nu_switch_ale = ale_dev_id->nu_switch_ale; + params->reg_fields = ale_dev_id->reg_fields; ale = devm_kzalloc(params->dev, sizeof(*ale), GFP_KERNEL); if (!ale) return ERR_PTR(-ENOMEM); + ale->regmap = devm_regmap_init_mmio(params->dev, params->ale_regs, + &ale_regmap_cfg); + if (IS_ERR(ale->regmap)) { + dev_err(params->dev, "Couldn't create CPSW ALE regmap\n"); + return ERR_PTR(-ENOMEM); + } + + ale->params = *params; + ret = cpsw_ale_regfield_init(ale); + if (ret) + return ERR_PTR(ret); ale->p0_untag_vid_mask = devm_bitmap_zalloc(params->dev, VLAN_N_VID, GFP_KERNEL); if (!ale->p0_untag_vid_mask) return ERR_PTR(-ENOMEM); - ale->params = *params; ale->ageout = ale->params.ale_ageout * HZ; ale->features = ale_dev_id->features; ale->vlan_entry_tbl = ale_dev_id->vlan_entry_tbl; - rev = readl_relaxed(ale->params.ale_regs + ALE_IDVER); - ale->version = - (ALE_VERSION_MAJOR(rev, ale->params.major_ver_mask) << 8) | - ALE_VERSION_MINOR(rev); + regmap_field_read(ale->fields[MINOR_VER], &rev_minor); + regmap_field_read(ale->fields[MAJOR_VER], &rev_major); + ale->version = rev_major << 8 | rev_minor; dev_info(ale->params.dev, "initialized cpsw ale version %d.%d\n", - ALE_VERSION_MAJOR(rev, ale->params.major_ver_mask), - ALE_VERSION_MINOR(rev)); + rev_major, rev_minor); if (ale->features & CPSW_ALE_F_STATUS_REG && !ale->params.ale_entries) { diff --git a/drivers/net/ethernet/ti/cpsw_ale.h b/drivers/net/ethernet/ti/cpsw_ale.h index 6779ee111d57..58d377dd7496 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.h +++ b/drivers/net/ethernet/ti/cpsw_ale.h @@ -8,6 +8,8 @@ #ifndef __TI_CPSW_ALE_H__ #define __TI_CPSW_ALE_H__ +struct reg_fields; + struct cpsw_ale_params { struct device *dev; void __iomem *ale_regs; @@ -20,19 +22,26 @@ struct cpsw_ale_params { * to identify this hardware. */ bool nu_switch_ale; - /* mask bit used in NU Switch ALE is 3 bits instead of 8 bits. So - * pass it from caller. 
- */ - u32 major_ver_mask; + const struct reg_field *reg_fields; const char *dev_id; unsigned long bus_freq; }; struct ale_entry_fld; +struct regmap; + +enum ale_fields { + MINOR_VER, + MAJOR_VER, + /* terminator */ + ALE_FIELDS_MAX, +}; struct cpsw_ale { struct cpsw_ale_params params; struct timer_list timer; + struct regmap *regmap; + struct regmap_field *fields[ALE_FIELDS_MAX]; unsigned long ageout; u32 version; u32 features; From patchwork Tue Sep 10 09:24:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roger Quadros X-Patchwork-Id: 827264 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9AD331862B8; Tue, 10 Sep 2024 09:24:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960266; cv=none; b=YCofBGx81na14sUJ8pNIu2peYUHE9mEBI2uLdyITlfWWXus54A+OECGC9PtAw9qgNEqhWaM+ZEdvcpkDIcfy3M0PIIGpd4sVr+cD8WCFuR/QnWWRx78SopT2oxvRps9C1NBtkp/gWJDUzvah8z1p3PdzvNVSTWvZA2hmpwea9bY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960266; c=relaxed/simple; bh=E1GSVGXmKxLAlK+LOwY9GuD3ZhcHQC6riVbOdVBvdug=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=M9SJfhKge5ttyw/0hFl5EMOslI+tEHEV3EuoyDLpMNozPUjocyCIY8u/k6JdR3IzAt/C1jpShdwtAAQ2uKKqbzBJ1NLDqM+wKKqUy8QnmbNcUkK0lFlJHk7h8BcCpnYIXzesuPTH65Zx6xvHx+P631fpyQCcHoc3nMnFO7b8SuE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=S1hFymu3; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="S1hFymu3" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6688FC4CEC3; Tue, 10 Sep 2024 09:24:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1725960266; bh=E1GSVGXmKxLAlK+LOwY9GuD3ZhcHQC6riVbOdVBvdug=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=S1hFymu32omjgdlTbmPtJimo+ExIUSsIErxs4XKpZRTN6fF3MPfTc8cz8GjO2g1xr zHYC/pxOKQKMpzZvgaGohMRI5TFPYbaqcMPp0dH8nr8A1DH6A+nxj1CrXTHRnB4IKB d8QuqWN0WLGB0sEWBAB4fLj3pCx77+TN4an7fWb/1To7hQJzKN5mpuTD30TCHQW3t9 ajMPrqispr3js0iqDov6LANFc+bY7demUYnllCn1tK+exYAxsCFsiD9B1LrMex4CP3 4pua4HW2t6w1SP85E7sfAftz6GjBeVq1srGGuVFt/oXPUFoxV+5RX2TNMIHli/WrKX JxdyB5UJqipkQ== From: Roger Quadros Date: Tue, 10 Sep 2024 12:24:00 +0300 Subject: [PATCH net-next v4 3/6] net: ethernet: ti: cpsw_ale: use regfields for number of Entries and Policers Precedence: bulk X-Mailing-List: linux-omap@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20240910-am65-cpsw-multi-rx-v4-3-077fa6403043@kernel.org> References: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org> In-Reply-To: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org> To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Siddharth Vadapalli , Julien Panis , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend Cc: Simon Horman , Andrew Lunn , Joe Damato , srk@ti.com, vigneshr@ti.com, danishanwar@ti.com, pekka Varis , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros X-Mailer: b4 0.14.1 X-Developer-Signature: v=1; a=openpgp-sha256; l=3847; i=rogerq@kernel.org; h=from:subject:message-id; bh=E1GSVGXmKxLAlK+LOwY9GuD3ZhcHQC6riVbOdVBvdug=; b=owEBbQKS/ZANAwAIAdJaa9O+djCTAcsmYgBm4BA1NMK8q3vTnC7Jy6dtIRAB6vdgaRYyp77gd 0OOYEC3666JAjMEAAEIAB0WIQRBIWXUTJ9SeA+rEFjSWmvTvnYwkwUCZuAQNQAKCRDSWmvTvnYw k9mED/9KsNqdtUIC2sZmXXP2Fo1hVMVB5qSk24APWblTth0yP/nKKH68zrUvfC71queB/KPvXWU Xowqe+3Qz00tUdShO675REZ0AUoh8JXIHYK2L8TTkgwoMOABr9uNIGr2bBvbSKC3fPzpfDg/1tA XvA/xq3CODEeMOcu3z/yAEQoavcDj9PxOieK6s9HYSE8E26mOrXS8vR9C1aw50IItrCfduOMcrr U3NEdmrKukyACA81eCEV7UEW8gnmZw0oXk/lGOQccm35DNPBfxekclTysvyC5rjO3IYAcfFSTD7 CDizUQiWsOmzELbcXNLUDlDIL2nT3cx8qrXEQHvVlRdDn/m87J/EpySRlAJmJnZUqqz7qh4DnMF sOXEj12FiyouQtuWNljrYYbOw5z9lbK2zdLHRKv9QuQfVbjwWveDHhyTEjHEo2Apf+gHVNnSp83 K0ifg3R4M3PAj1ALZ2Ec9v11AAC4yOFKNegXybw4MLcc68psdXeaZShjhftcYCQLyyGem05QT74 69LnGxcGCgNvjwpN/btnBXD2fLRRWC5bYnfckKELRoXADj6+tGUYnOdVi9NWdWL6YbevseuQ2ua 8A/VhWZcBWtuAwPawdgXz5fm0ao+rJhv/BrHaG0CDb8BEiwd3dKCyTjW0GdILsvIQoNIyL+iNob 8Edblvr+F0LCT+A== X-Developer-Key: i=rogerq@kernel.org; a=openpgp; fpr=412165D44C9F52780FAB1058D25A6BD3BE763093 Use regfields for number of ALE Entries and Policers. The variants that support Policers/Classifiers have the number of policers encoded in the ALE_STATUS register. Use that and show the number of Policers in the ALE info message. Signed-off-by: Roger Quadros Reviewed-by: Simon Horman --- Changelog: v4: - reverse Xmas tree declaration order fixes v3: - added Reviewed-by Simon Horman --- drivers/net/ethernet/ti/cpsw_ale.c | 25 +++++++++++++++++++------ drivers/net/ethernet/ti/cpsw_ale.h | 3 +++ 2 files changed, 22 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c index 979f741a231d..9e45470b4eb9 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.c +++ b/drivers/net/ethernet/ti/cpsw_ale.c @@ -103,7 +103,7 @@ struct cpsw_ale_dev_id { #define ALE_UCAST_TOUCHED 3 #define ALE_TABLE_SIZE_MULTIPLIER 1024 -#define ALE_STATUS_SIZE_MASK 0x1f +#define ALE_POLICER_SIZE_MULTIPLIER 8 static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits) { @@ -1303,6 +1303,9 @@ static const struct reg_field ale_fields_cpsw_nu[] = { /* CPSW_ALE_IDVER_REG */ [MINOR_VER] = REG_FIELD(ALE_IDVER, 0, 7), [MAJOR_VER] = REG_FIELD(ALE_IDVER, 8, 10), + /* CPSW_ALE_STATUS_REG */ + [ALE_ENTRIES] = REG_FIELD(ALE_STATUS, 0, 7), + [ALE_POLICERS] = REG_FIELD(ALE_STATUS, 8, 15), }; static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = { @@ -1402,8 +1405,8 @@ static int cpsw_ale_regfield_init(struct cpsw_ale *ale) struct cpsw_ale *cpsw_ale_create(struct cpsw_ale_params *params) { + u32 ale_entries, rev_major, rev_minor, policers; const struct cpsw_ale_dev_id *ale_dev_id; - u32 ale_entries, rev_major, rev_minor; struct cpsw_ale *ale; int ret; @@ -1447,9 +1450,7 @@ struct cpsw_ale *cpsw_ale_create(struct cpsw_ale_params *params) if (ale->features & CPSW_ALE_F_STATUS_REG && !ale->params.ale_entries) { - ale_entries = - readl_relaxed(ale->params.ale_regs + ALE_STATUS) & - ALE_STATUS_SIZE_MASK; + regmap_field_read(ale->fields[ALE_ENTRIES], &ale_entries); /* ALE available on 
newer NetCP switches has introduced * a register, ALE_STATUS, to indicate the size of ALE * table which shows the size as a multiple of 1024 entries. @@ -1463,8 +1464,20 @@ struct cpsw_ale *cpsw_ale_create(struct cpsw_ale_params *params) ale_entries *= ALE_TABLE_SIZE_MULTIPLIER; ale->params.ale_entries = ale_entries; } + + if (ale->features & CPSW_ALE_F_STATUS_REG && + !ale->params.num_policers) { + regmap_field_read(ale->fields[ALE_POLICERS], &policers); + if (!policers) + return ERR_PTR(-EINVAL); + + policers *= ALE_POLICER_SIZE_MULTIPLIER; + ale->params.num_policers = policers; + } + dev_info(ale->params.dev, - "ALE Table size %ld\n", ale->params.ale_entries); + "ALE Table size %ld, Policers %ld\n", ale->params.ale_entries, + ale->params.num_policers); /* set default bits for existing h/w */ ale->port_mask_bits = ale->params.ale_ports; diff --git a/drivers/net/ethernet/ti/cpsw_ale.h b/drivers/net/ethernet/ti/cpsw_ale.h index 58d377dd7496..e12bb2caf016 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.h +++ b/drivers/net/ethernet/ti/cpsw_ale.h @@ -15,6 +15,7 @@ struct cpsw_ale_params { void __iomem *ale_regs; unsigned long ale_ageout; /* in secs */ unsigned long ale_entries; + unsigned long num_policers; unsigned long ale_ports; /* NU Switch has specific handling as number of bits in ALE entries * are different than other versions of ALE. Also there are specific @@ -33,6 +34,8 @@ struct regmap; enum ale_fields { MINOR_VER, MAJOR_VER, + ALE_ENTRIES, + ALE_POLICERS, /* terminator */ ALE_FIELDS_MAX, }; From patchwork Tue Sep 10 09:24:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roger Quadros X-Patchwork-Id: 827603 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8AF9B1891A1; Tue, 10 Sep 2024 09:24:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960271; cv=none; b=ZTAzYIyV8bvxwpfiTPWlO9cMO7mJlQdYU/gt3ELSXSrv4t05ZABrNp+Rkb3o3H9XA+jrWUyxmQP+1zNNnxugyl28mtqL0zyO9RHrv6T+cQSVt2fDGsdEXhIHiEAw6uydr/tKxQlSNJLk434MmyJCvbafTevESnYaST9B1GlbhZ4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960271; c=relaxed/simple; bh=5EGPy5Ci3JV09svqH6z3s9p0uaptNCwmvPw8SljHsN8=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=na1S7GOruuU8lN5NXkLawIgzNo4015f8dmrHwOkFcKCSjawsAdzL4dOWxOmrICdFo9E2COVSqxGpMfp1oTIsfyTlBHom9ChN563WWmKPDXdigmH738pII5x1bMNjnOdnNdq7V/YWvGOajIMTNbCgd5MLxnend5UAWVh8WpNhImo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=HYTVlggZ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="HYTVlggZ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9819EC4CEC6; Tue, 10 Sep 2024 09:24:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1725960271; bh=5EGPy5Ci3JV09svqH6z3s9p0uaptNCwmvPw8SljHsN8=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=HYTVlggZXxBETx1+KJUp6ZW4lQmnBYn54RyP4keM+jDaWELa3EpcwHcCu7BwnUjDM tHTLkbBhqZmxkmRGBCdqx+2jAVjaJn6yMvWSELUakkPefIudWFKyrK1VKUAJQdYa1y 
From: Roger Quadros
Date: Tue, 10 Sep 2024 12:24:01 +0300
Subject: [PATCH net-next v4 4/6] net: ethernet: ti: cpsw_ale: add Policer and Thread control register fields
Message-Id: <20240910-am65-cpsw-multi-rx-v4-4-077fa6403043@kernel.org>
References: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org>
In-Reply-To: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org>

Adds regfields for Policer registers and Thread mapping/control
registers.
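
As a hint of how these new regfields can be driven, a classifier index
could be mapped to a host thread (RX flow) roughly as sketched below.
This is illustrative only: the helper name, its exact programming
sequence and its placement are assumptions, not part of this patch; it
relies on the cpsw_ale.h definitions introduced by this series:

/* Map ALE classifier entry 'idx' to host thread (RX flow) 'thread_id'.
 * Illustrative sketch only; names and sequencing are assumptions.
 */
static int cpsw_ale_thread_map(struct cpsw_ale *ale, u32 idx, u32 thread_id,
                               bool enable)
{
        int ret;

        /* select the classifier entry whose thread mapping is programmed */
        ret = regmap_field_write(ale->fields[ALE_THREAD_CLASS_INDEX], idx);
        if (ret)
                return ret;

        /* thread (flow) value for that entry */
        ret = regmap_field_write(ale->fields[ALE_THREAD_VALUE], thread_id);
        if (ret)
                return ret;

        /* enable or disable the mapping */
        return regmap_field_write(ale->fields[ALE_THREAD_ENABLE],
                                  enable ? 1 : 0);
}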
Signed-off-by: Roger Quadros Reviewed-by: Simon Horman --- Changelog: v4: - no change v3: - added Reviewed-by Simon Horman --- drivers/net/ethernet/ti/cpsw_ale.c | 86 ++++++++++++++++++++++++++++++++++++++ drivers/net/ethernet/ti/cpsw_ale.h | 41 ++++++++++++++++++ 2 files changed, 127 insertions(+) diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c index 9e45470b4eb9..aafe1210de52 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.c +++ b/drivers/net/ethernet/ti/cpsw_ale.c @@ -46,6 +46,24 @@ #define ALE_UNKNOWNVLAN_FORCE_UNTAG_EGRESS 0x9C #define ALE_VLAN_MASK_MUX(reg) (0xc0 + (0x4 * (reg))) +#define ALE_POLICER_PORT_OUI 0x100 +#define ALE_POLICER_DA_SA 0x104 +#define ALE_POLICER_VLAN 0x108 +#define ALE_POLICER_ETHERTYPE_IPSA 0x10c +#define ALE_POLICER_IPDA 0x110 +#define ALE_POLICER_PIR 0x118 +#define ALE_POLICER_CIR 0x11c +#define ALE_POLICER_TBL_CTL 0x120 +#define ALE_POLICER_CTL 0x124 +#define ALE_POLICER_TEST_CTL 0x128 +#define ALE_POLICER_HIT_STATUS 0x12c +#define ALE_THREAD_DEF 0x134 +#define ALE_THREAD_CTL 0x138 +#define ALE_THREAD_VAL 0x13c + +#define ALE_POLICER_TBL_WRITE_ENABLE BIT(31) +#define ALE_POLICER_TBL_INDEX_MASK GENMASK(4, 0) + #define AM65_CPSW_ALE_THREAD_DEF_REG 0x134 /* ALE_AGING_TIMER */ @@ -1306,6 +1324,74 @@ static const struct reg_field ale_fields_cpsw_nu[] = { /* CPSW_ALE_STATUS_REG */ [ALE_ENTRIES] = REG_FIELD(ALE_STATUS, 0, 7), [ALE_POLICERS] = REG_FIELD(ALE_STATUS, 8, 15), + /* CPSW_ALE_POLICER_PORT_OUI_REG */ + [POL_PORT_MEN] = REG_FIELD(ALE_POLICER_PORT_OUI, 31, 31), + [POL_TRUNK_ID] = REG_FIELD(ALE_POLICER_PORT_OUI, 30, 30), + [POL_PORT_NUM] = REG_FIELD(ALE_POLICER_PORT_OUI, 25, 25), + [POL_PRI_MEN] = REG_FIELD(ALE_POLICER_PORT_OUI, 19, 19), + [POL_PRI_VAL] = REG_FIELD(ALE_POLICER_PORT_OUI, 16, 18), + [POL_OUI_MEN] = REG_FIELD(ALE_POLICER_PORT_OUI, 15, 15), + [POL_OUI_INDEX] = REG_FIELD(ALE_POLICER_PORT_OUI, 0, 5), + + /* CPSW_ALE_POLICER_DA_SA_REG */ + [POL_DST_MEN] = REG_FIELD(ALE_POLICER_DA_SA, 31, 31), + [POL_DST_INDEX] = REG_FIELD(ALE_POLICER_DA_SA, 16, 21), + [POL_SRC_MEN] = REG_FIELD(ALE_POLICER_DA_SA, 15, 15), + [POL_SRC_INDEX] = REG_FIELD(ALE_POLICER_DA_SA, 0, 5), + + /* CPSW_ALE_POLICER_VLAN_REG */ + [POL_OVLAN_MEN] = REG_FIELD(ALE_POLICER_VLAN, 31, 31), + [POL_OVLAN_INDEX] = REG_FIELD(ALE_POLICER_VLAN, 16, 21), + [POL_IVLAN_MEN] = REG_FIELD(ALE_POLICER_VLAN, 15, 15), + [POL_IVLAN_INDEX] = REG_FIELD(ALE_POLICER_VLAN, 0, 5), + + /* CPSW_ALE_POLICER_ETHERTYPE_IPSA_REG */ + [POL_ETHERTYPE_MEN] = REG_FIELD(ALE_POLICER_ETHERTYPE_IPSA, 31, 31), + [POL_ETHERTYPE_INDEX] = REG_FIELD(ALE_POLICER_ETHERTYPE_IPSA, 16, 21), + [POL_IPSRC_MEN] = REG_FIELD(ALE_POLICER_ETHERTYPE_IPSA, 15, 15), + [POL_IPSRC_INDEX] = REG_FIELD(ALE_POLICER_ETHERTYPE_IPSA, 0, 5), + + /* CPSW_ALE_POLICER_IPDA_REG */ + [POL_IPDST_MEN] = REG_FIELD(ALE_POLICER_IPDA, 31, 31), + [POL_IPDST_INDEX] = REG_FIELD(ALE_POLICER_IPDA, 16, 21), + + /* CPSW_ALE_POLICER_TBL_CTL_REG */ + /** + * REG_FIELDS not defined for this as fields cannot be correctly + * used independently + */ + + /* CPSW_ALE_POLICER_CTL_REG */ + [POL_EN] = REG_FIELD(ALE_POLICER_CTL, 31, 31), + [POL_RED_DROP_EN] = REG_FIELD(ALE_POLICER_CTL, 29, 29), + [POL_YELLOW_DROP_EN] = REG_FIELD(ALE_POLICER_CTL, 28, 28), + [POL_YELLOW_THRESH] = REG_FIELD(ALE_POLICER_CTL, 24, 26), + [POL_POL_MATCH_MODE] = REG_FIELD(ALE_POLICER_CTL, 22, 23), + [POL_PRIORITY_THREAD_EN] = REG_FIELD(ALE_POLICER_CTL, 21, 21), + [POL_MAC_ONLY_DEF_DIS] = REG_FIELD(ALE_POLICER_CTL, 20, 20), + + /* CPSW_ALE_POLICER_TEST_CTL_REG */ + 
[POL_TEST_CLR] = REG_FIELD(ALE_POLICER_TEST_CTL, 31, 31), + [POL_TEST_CLR_RED] = REG_FIELD(ALE_POLICER_TEST_CTL, 30, 30), + [POL_TEST_CLR_YELLOW] = REG_FIELD(ALE_POLICER_TEST_CTL, 29, 29), + [POL_TEST_CLR_SELECTED] = REG_FIELD(ALE_POLICER_TEST_CTL, 28, 28), + [POL_TEST_ENTRY] = REG_FIELD(ALE_POLICER_TEST_CTL, 0, 4), + + /* CPSW_ALE_POLICER_HIT_STATUS_REG */ + [POL_STATUS_HIT] = REG_FIELD(ALE_POLICER_HIT_STATUS, 31, 31), + [POL_STATUS_HIT_RED] = REG_FIELD(ALE_POLICER_HIT_STATUS, 30, 30), + [POL_STATUS_HIT_YELLOW] = REG_FIELD(ALE_POLICER_HIT_STATUS, 29, 29), + + /* CPSW_ALE_THREAD_DEF_REG */ + [ALE_DEFAULT_THREAD_EN] = REG_FIELD(ALE_THREAD_DEF, 15, 15), + [ALE_DEFAULT_THREAD_VAL] = REG_FIELD(ALE_THREAD_DEF, 0, 5), + + /* CPSW_ALE_THREAD_CTL_REG */ + [ALE_THREAD_CLASS_INDEX] = REG_FIELD(ALE_THREAD_CTL, 0, 4), + + /* CPSW_ALE_THREAD_VAL_REG */ + [ALE_THREAD_ENABLE] = REG_FIELD(ALE_THREAD_VAL, 15, 15), + [ALE_THREAD_VALUE] = REG_FIELD(ALE_THREAD_VAL, 0, 5), }; static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = { diff --git a/drivers/net/ethernet/ti/cpsw_ale.h b/drivers/net/ethernet/ti/cpsw_ale.h index e12bb2caf016..2cb76acc6d16 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.h +++ b/drivers/net/ethernet/ti/cpsw_ale.h @@ -36,6 +36,47 @@ enum ale_fields { MAJOR_VER, ALE_ENTRIES, ALE_POLICERS, + POL_PORT_MEN, + POL_TRUNK_ID, + POL_PORT_NUM, + POL_PRI_MEN, + POL_PRI_VAL, + POL_OUI_MEN, + POL_OUI_INDEX, + POL_DST_MEN, + POL_DST_INDEX, + POL_SRC_MEN, + POL_SRC_INDEX, + POL_OVLAN_MEN, + POL_OVLAN_INDEX, + POL_IVLAN_MEN, + POL_IVLAN_INDEX, + POL_ETHERTYPE_MEN, + POL_ETHERTYPE_INDEX, + POL_IPSRC_MEN, + POL_IPSRC_INDEX, + POL_IPDST_MEN, + POL_IPDST_INDEX, + POL_EN, + POL_RED_DROP_EN, + POL_YELLOW_DROP_EN, + POL_YELLOW_THRESH, + POL_POL_MATCH_MODE, + POL_PRIORITY_THREAD_EN, + POL_MAC_ONLY_DEF_DIS, + POL_TEST_CLR, + POL_TEST_CLR_RED, + POL_TEST_CLR_YELLOW, + POL_TEST_CLR_SELECTED, + POL_TEST_ENTRY, + POL_STATUS_HIT, + POL_STATUS_HIT_RED, + POL_STATUS_HIT_YELLOW, + ALE_DEFAULT_THREAD_EN, + ALE_DEFAULT_THREAD_VAL, + ALE_THREAD_CLASS_INDEX, + ALE_THREAD_ENABLE, + ALE_THREAD_VALUE, /* terminator */ ALE_FIELDS_MAX, }; From patchwork Tue Sep 10 09:24:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roger Quadros X-Patchwork-Id: 827263 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CBDFD1891A1; Tue, 10 Sep 2024 09:24:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960276; cv=none; b=RmyYEOxF285hGNGuQl/i6II4LlGBmr+pTFB2pXbp87IYI8rX9Y0oZskEhwhkmWYNBlH+PT4q3AMKZi5OYC8I4aHtj800votis49SVzZc6Bm//iegxxXHw/MePa6uML6G/7ZpzlbDLfG4plzGSN1/U+Co+vQ3Wpzz14NKDXRPne4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960276; c=relaxed/simple; bh=Qd0j7HD4DsmaZ5Rx4eSfD1A4i/7ZLHtFqt+DVyDB9E4=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=K5ypMbPYNG/AcEZ/yjAz4l7hNcQahJ18onzGHm9BgXwxb/gO5QDtVx0HhDT6goE37Y165d3ooNv+pgwWiNTuFOB1lVwdKUdadfiaVBvugGL2IoX+EV0SQZ/siuCndVdlYZRzPekwFwnglnxL7aGhFkXmF5XbJcZyWuY1UH6ZL4k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org 
header.b=h9Xm7KY2; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="h9Xm7KY2" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BC38AC4CEC6; Tue, 10 Sep 2024 09:24:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1725960276; bh=Qd0j7HD4DsmaZ5Rx4eSfD1A4i/7ZLHtFqt+DVyDB9E4=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=h9Xm7KY2LeC0G3Ro51ESXTcIn03hjj3rmvEAjYGOWDzbF4BGONEUWD2QtSQkfHw7f +35WRy8TtbVHa4mGGJl8UcHNwIsHdD1568YxZgdii24mgeuXSelrdTpcnMRfZi8aNX SVYTwUKzpx0HLsv4qPrGp6LhA2MOMGrkMFpSLf60/LOGERx1WBNUspKAMKC0C0+xfe EHxC3t1fYLaODbexPVVAW89O94daOU9nG/TYhq6PLCuqh0ERVI/uZeWppAKKivwoGP 2fVD1Q+fS4AFJIW63HUiBZf2X4Lvjz9mJwy8CcuqIQRkEYJfa1pjGbeJcA/NU5gZ4I FF1dFcEwq85Jg== From: Roger Quadros Date: Tue, 10 Sep 2024 12:24:02 +0300 Subject: [PATCH net-next v4 5/6] net: ethernet: ti: cpsw_ale: add policer/classifier helpers and setup defaults Precedence: bulk X-Mailing-List: linux-omap@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20240910-am65-cpsw-multi-rx-v4-5-077fa6403043@kernel.org> References: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org> In-Reply-To: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Siddharth Vadapalli , Julien Panis , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend Cc: Simon Horman , Andrew Lunn , Joe Damato , srk@ti.com, vigneshr@ti.com, danishanwar@ti.com, pekka Varis , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros X-Mailer: b4 0.14.1 X-Developer-Signature: v=1; a=openpgp-sha256; l=6095; i=rogerq@kernel.org; h=from:subject:message-id; bh=Qd0j7HD4DsmaZ5Rx4eSfD1A4i/7ZLHtFqt+DVyDB9E4=; b=owEBbQKS/ZANAwAIAdJaa9O+djCTAcsmYgBm4BA19Dv6aUGGhiRP/9HmBWT31pBFuWkt766K4 iEGBPK/B2OJAjMEAAEIAB0WIQRBIWXUTJ9SeA+rEFjSWmvTvnYwkwUCZuAQNQAKCRDSWmvTvnYw k8WxD/0TYNHXhre4sESpo35jaaE1UYSad+WWq9bcfoWBsa2moRkS6xNjwNAgj863BqRK0CkIe9j S/u4M7Kxia5OStjHBBESqYDeW5Kh0McoJ2yM+IwlbB07VmbSYA2VK8Y+Fav0/Lv7wOAS4JPKJmP tud0HFC0kjwrP6x1uTjjV6bjQqqfxkjfGJWO4Pg+NiWcpJM2AyUfa4zs2jhRUZQNdWUwik9IwTA 7cVvvLPejGxyXbPNDkG8NAI1IQjwKpyI+dTqyQGafp1jObWCHBxGwsXZmXB3PuuJw3HfZIKnQgL y3Ce1cqghs1oj+3cO68eH86FcGUHC0GUQ5DU/tbr7LBqUa2NdGvUHQtOmEB9wn99KVK0+xK+iF2 J0QBSZwY8FKVgHUEdPCLvIZz8UOrJPN146+CaFhQpelYd7TDKd7BK9JC/vs0sCOLqvtMgKvMdmF +EOA/0Yrc0WLasQvwzyI54io2m2VvZPtQMH7IA9RtWL9kGfGVZlnyhk+xYaLzmXOhznijaS2np2 F5E7JsKqA9uBT/rs7bBPIZyaK/+bOvYqLH0fSic8wugyve9yPktAQ19yDRZcmVxUbx5T6mjvTjF WtSInZSPHFnx30EE1k0UHhLppkCUtTf0wKTQZY+5ed1kxEb8C3v5KN7YHf8DeCF6EmTPoFJqGn/ J2BESV/EwYzBZHg== X-Developer-Key: i=rogerq@kernel.org; a=openpgp; fpr=412165D44C9F52780FAB1058D25A6BD3BE763093 The Policer registers in the ALE register space are just shadow registers and use an index field in the policer table control register to read/write to the actual Polier registers. Add helper functions to Read and Write to Policer registers. Also add a helper function to set the thread value to classifier/policer mapping. Any packet that first matches the classifier will be sent to the thread (flow) that is set in the classifier to thread mapping table. If not set then it goes to the default flow. Default behaviour is to have 8 classifiers to map 8 DSCP/PCP priorities to N receive threads (flows). N depends on number of RX channels enabled for the port. 
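For illustration only, and not part of this patch: the indexed (shadow) access described above reduces to "load entry, update fields through regmap, commit entry". A minimal sketch using the read/write index helpers introduced here; the wrapper name and the PCP-match example are hypothetical:

/* Sketch only: make policer/classifier entry 'idx' match VLAN priority
 * 'pcp'. cpsw_ale_policer_read_idx() loads the entry into the shadow
 * POLICER registers, the regmap field writes update that shadow copy,
 * and cpsw_ale_policer_write_idx() commits it back to entry 'idx'.
 */
static void cpsw_ale_policer_match_pcp(struct cpsw_ale *ale, u32 idx, u32 pcp)
{
	cpsw_ale_policer_read_idx(ale, idx);
	regmap_field_write(ale->fields[POL_PRI_VAL], pcp);
	regmap_field_write(ale->fields[POL_PRI_MEN], 1);
	cpsw_ale_policer_write_idx(ale, idx);
}

Nothing reaches the selected entry until the final step writes the index with ALE_POLICER_TBL_WRITE_ENABLE set in the policer table control register.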
As per the standard [1], user priorities 1 (Background) and 2 (Spare) have lower priority than user priority 0 (default), with user priority 1 being the lowest priority. [1] IEEE802.1D-2004, IEEE Standard for Local and metropolitan area networks Table G-2 - Traffic type acronyms Table G-3 - Defining traffic types Signed-off-by: Roger Quadros Reviewed-by: Simon Horman --- Changelog: v4: - no change v3: - squashed 2 patches into 1 - added comment explaining the default thread to priority mapping table - typo fix "classifer"->"classifier" --- drivers/net/ethernet/ti/cpsw_ale.c | 94 ++++++++++++++++++++++++++++++++++++++ drivers/net/ethernet/ti/cpsw_ale.h | 1 + 2 files changed, 95 insertions(+) diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c index aafe1210de52..0d5d8917c70b 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.c +++ b/drivers/net/ethernet/ti/cpsw_ale.c @@ -1627,3 +1627,97 @@ u32 cpsw_ale_get_num_entries(struct cpsw_ale *ale) { return ale ? ale->params.ale_entries : 0; } + +/* Reads the specified policer index into ALE POLICER registers */ +static void cpsw_ale_policer_read_idx(struct cpsw_ale *ale, u32 idx) +{ + idx &= ALE_POLICER_TBL_INDEX_MASK; + writel_relaxed(idx, ale->params.ale_regs + ALE_POLICER_TBL_CTL); +} + +/* Writes the ALE POLICER registers into the specified policer index */ +static void cpsw_ale_policer_write_idx(struct cpsw_ale *ale, u32 idx) +{ + idx &= ALE_POLICER_TBL_INDEX_MASK; + idx |= ALE_POLICER_TBL_WRITE_ENABLE; + writel_relaxed(idx, ale->params.ale_regs + ALE_POLICER_TBL_CTL); +} + +/* enables/disables the custom thread value for the specified policer index */ +static void cpsw_ale_policer_thread_idx_enable(struct cpsw_ale *ale, u32 idx, + u32 thread_id, bool enable) +{ + regmap_field_write(ale->fields[ALE_THREAD_CLASS_INDEX], idx); + regmap_field_write(ale->fields[ALE_THREAD_VALUE], thread_id); + regmap_field_write(ale->fields[ALE_THREAD_ENABLE], enable ? 1 : 0); +} + +/* Disable all policer entries and thread mappings */ +static void cpsw_ale_policer_reset(struct cpsw_ale *ale) +{ + int i; + + for (i = 0; i < ale->params.num_policers ; i++) { + cpsw_ale_policer_read_idx(ale, i); + regmap_field_write(ale->fields[POL_PORT_MEN], 0); + regmap_field_write(ale->fields[POL_PRI_MEN], 0); + regmap_field_write(ale->fields[POL_OUI_MEN], 0); + regmap_field_write(ale->fields[POL_DST_MEN], 0); + regmap_field_write(ale->fields[POL_SRC_MEN], 0); + regmap_field_write(ale->fields[POL_OVLAN_MEN], 0); + regmap_field_write(ale->fields[POL_IVLAN_MEN], 0); + regmap_field_write(ale->fields[POL_ETHERTYPE_MEN], 0); + regmap_field_write(ale->fields[POL_IPSRC_MEN], 0); + regmap_field_write(ale->fields[POL_IPDST_MEN], 0); + regmap_field_write(ale->fields[POL_EN], 0); + regmap_field_write(ale->fields[POL_RED_DROP_EN], 0); + regmap_field_write(ale->fields[POL_YELLOW_DROP_EN], 0); + regmap_field_write(ale->fields[POL_PRIORITY_THREAD_EN], 0); + + cpsw_ale_policer_thread_idx_enable(ale, i, 0, 0); + } +} + +/* Default classifier is to map 8 user priorities to N receive channels */ +void cpsw_ale_classifier_setup_default(struct cpsw_ale *ale, int num_rx_ch) +{ + int pri, idx; + /* IEEE802.1D-2004, Standard for Local and metropolitan area networks + * Table G-2 - Traffic type acronyms + * Table G-3 - Defining traffic types + * User priority values 1 and 2 effectively communicate a lower + * priority than 0. In the below table 0 is assigned to higher priority + * thread than 1 and 2 wherever possible.
+ * The below table maps which thread the user priority needs to be + * sent to for a given number of threads (RX channels). Upper threads + * have higher priority. + * e.g. if number of threads is 8 then user priority 0 will map to + * pri_thread_map[8-1][0] i.e. thread 2 + */ + int pri_thread_map[8][8] = { { 0, 0, 0, 0, 0, 0, 0, 0, }, + { 0, 0, 0, 0, 1, 1, 1, 1, }, + { 0, 0, 0, 0, 1, 1, 2, 2, }, + { 1, 0, 0, 1, 2, 2, 3, 3, }, + { 1, 0, 0, 1, 2, 3, 4, 4, }, + { 1, 0, 0, 2, 3, 4, 5, 5, }, + { 1, 0, 0, 2, 3, 4, 5, 6, }, + { 2, 0, 1, 3, 4, 5, 6, 7, } }; + + cpsw_ale_policer_reset(ale); + + /* use first 8 classifiers to map 8 (DSCP/PCP) priorities to channels */ + for (pri = 0; pri < 8; pri++) { + idx = pri; + + /* Classifier 'idx' match on priority 'pri' */ + cpsw_ale_policer_read_idx(ale, idx); + regmap_field_write(ale->fields[POL_PRI_VAL], pri); + regmap_field_write(ale->fields[POL_PRI_MEN], 1); + cpsw_ale_policer_write_idx(ale, idx); + + /* Map Classifier 'idx' to thread provided by the map */ + cpsw_ale_policer_thread_idx_enable(ale, idx, + pri_thread_map[num_rx_ch - 1][pri], + 1); + } +} diff --git a/drivers/net/ethernet/ti/cpsw_ale.h b/drivers/net/ethernet/ti/cpsw_ale.h index 2cb76acc6d16..1e4e9a3dd234 100644 --- a/drivers/net/ethernet/ti/cpsw_ale.h +++ b/drivers/net/ethernet/ti/cpsw_ale.h @@ -193,5 +193,6 @@ int cpsw_ale_vlan_add_modify(struct cpsw_ale *ale, u16 vid, int port_mask, int cpsw_ale_vlan_del_modify(struct cpsw_ale *ale, u16 vid, int port_mask); void cpsw_ale_set_unreg_mcast(struct cpsw_ale *ale, int unreg_mcast_mask, bool add); +void cpsw_ale_classifier_setup_default(struct cpsw_ale *ale, int num_rx_ch); #endif From patchwork Tue Sep 10 09:24:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roger Quadros X-Patchwork-Id: 827602 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A4385188A09; Tue, 10 Sep 2024 09:24:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960281; cv=none; b=sdryvADuRTgquiGCJGwYG2ACZJXZIDRxUSZjelKyKSi7dsaXHgT7DYz+YsgO/0Yc1E3f0hUSoTCFGSzGYQTfegtGa1M2IBLt5D21Zlw/4tbw/8B59QWPdwFhKia/dpR8XMATEhRf+2Ps5vFvQQ79Hjpi1D849xYg1ARooEqwKfA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725960281; c=relaxed/simple; bh=XkhzSnvw+5dFkNxwBsvjI0GFrEiYS/TMYTriwXBUSCs=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=MVUaahOCnk5rjZt+/L2InQh0mWnqDoelbtOokJQTwg7xnmr3+LP3l6iSF0PnZvWsPgG7BzE5cHD+0lAATf0TkXKc6dHz6wl/p2sJkUVhEuVeKy+1I2CM2j2cNakWbacEjVq0jRRs7R9GlMUUvjenWXAeplsjdp19R1mXu2yk9EY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=AkzD7+Gm; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="AkzD7+Gm" Received: by smtp.kernel.org (Postfix) with ESMTPSA id DFC37C4CECE; Tue, 10 Sep 2024 09:24:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1725960281; bh=XkhzSnvw+5dFkNxwBsvjI0GFrEiYS/TMYTriwXBUSCs=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; 
b=AkzD7+Gm2B6J5qwAVqJf5gcYNcMTZYuBjRo8G1W1BM43z9gfU6w6lrTBkRcvVyqxI 5B1d0yJ9A4Czyp5VCS2k0Z0R/skxEZI5cj115tKQi7kazhSLzPtQLTO/vryaPl2mBg NdoQqWKEtdIK2vom3jfUqMw0nr8VlCu+NeOLyPMS2GwoLeh5N3ARxk1gGVuYS/7QDY i+FHilALLUC7U4R3EFhXpuOJiJHyO4s7DgqnVgNuEUAfrCriJCWelSNtQGZO2ibuhk OyDfQQFKTaiG5JHkBiqG/R8m5a1mXnTx3THbiST+kzZ4mVVCPBQZXX/NjlB4+U0+Gv 9hYaLnA71WIYQ== From: Roger Quadros Date: Tue, 10 Sep 2024 12:24:03 +0300 Subject: [PATCH net-next v4 6/6] net: ethernet: ti: am65-cpsw: setup priority to flow mapping Precedence: bulk X-Mailing-List: linux-omap@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20240910-am65-cpsw-multi-rx-v4-6-077fa6403043@kernel.org> References: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org> In-Reply-To: <20240910-am65-cpsw-multi-rx-v4-0-077fa6403043@kernel.org> To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Siddharth Vadapalli , Julien Panis , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend Cc: Simon Horman , Andrew Lunn , Joe Damato , srk@ti.com, vigneshr@ti.com, danishanwar@ti.com, pekka Varis , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org, bpf@vger.kernel.org, Roger Quadros X-Mailer: b4 0.14.1 X-Developer-Signature: v=1; a=openpgp-sha256; l=1093; i=rogerq@kernel.org; h=from:subject:message-id; bh=XkhzSnvw+5dFkNxwBsvjI0GFrEiYS/TMYTriwXBUSCs=; b=owEBbQKS/ZANAwAIAdJaa9O+djCTAcsmYgBm4BA1N2SGIUlwRTcoPwYtwPHf2/tDs/B+nkSnf AQAY/VesceJAjMEAAEIAB0WIQRBIWXUTJ9SeA+rEFjSWmvTvnYwkwUCZuAQNQAKCRDSWmvTvnYw k2GlD/0YzBOgrE5fWuCktTv3MeJymXE1u9zY86fsC7rkXokxs63yMs2CbbxEFpC1b3f1NQHdM5s xzT2e0keX2H4cTFrTuzbJ2GDvv1+aMqL9UuvgXeUkU4wfliYUATWZvolm/u7jLKhu7BGISA9le5 VGmSkyA94GlGxTi36SraoUEVIymCgDiZB7Rhyxoagf0edrHlP+ECMGN6aXFrUrxea6LFZ6g0g4K lToZ9GOG7O5OxlbPqpMUKvVu9ORRrpO3iJvUqYkU1lc66pVURBJEw4U1E0QEbG8wf9fLKjf7wXn bU5H8Yg66JjDAwd9JdWWyuQ7eXhF2DrqtYAC0Dt82fHuSG7BGrl317QdmjHa8YcTV3g4MnIx7xy V9kUf8tCx6e9ChgYNtyuR7GFEZVlH9q4tyFVHVN5L+d5T7KjisNtwf0Za+zqAc94K5EiRaRXW7I InlXiDE/gsUFwqrZ6gIs829YhyBm5YM9BTlQ+BvXIna+xydvbUfpu/mN3oUWl3Jiq/9s6QPKR8I rTskpnQG+P3oy0/xlgGGDClRpenrc9fvDvO2vWkh1HLjGK1pFW3KPeQRxBcxZA7AMStNXheVvs0 cxcKCgxBwuFq3/8G9lhbYQ2GvOZuYnmqEghy15+0MOdgQlS/5F6+49Kyo/q57CDdCSPLvyVEqPE uvp+NKU9Ni4BsRg== X-Developer-Key: i=rogerq@kernel.org; a=openpgp; fpr=412165D44C9F52780FAB1058D25A6BD3BE763093 Now that we support multiple RX queues, enable default priority to flow mapping so that higher priority packets come on higher channels (flows). The Classifier checks for PCP/DSCP priority in the packet and routes them to the appropriate flow. Signed-off-by: Roger Quadros Reviewed-by: Simon Horman --- Changelog: v4: - no change v3: - added Reviewed-by Simon Horman --- drivers/net/ethernet/ti/am65-cpsw-nuss.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c index 76e62351b30b..cbe99017cbfa 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c @@ -2500,6 +2500,9 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common) } } + /* setup classifier to route priorities to flows */ + cpsw_ale_classifier_setup_default(common->ale, common->rx_ch_num_flows); + err: i = devm_add_action(dev, am65_cpsw_nuss_free_rx_chns, common); if (i) {