From patchwork Mon Mar 29 13:40:10 2021
X-Patchwork-Submitter: Ong Boon Leong
X-Patchwork-Id: 411198
From: Ong Boon Leong
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, "David S. Miller",
    Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend
Cc: Maxime Coquelin, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, netdev@vger.kernel.org,
    linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    bpf@vger.kernel.org, Ong Boon Leong
Subject: [PATCH net-next 3/6] net: stmmac: arrange Tx tail pointer update to stmmac_flush_tx_descriptors
Date: Mon, 29 Mar 2021 21:40:10 +0800
Message-Id: <20210329134013.9516-4-boon.leong.ong@intel.com>
In-Reply-To: <20210329134013.9516-1-boon.leong.ong@intel.com>
References: <20210329134013.9516-1-boon.leong.ong@intel.com>
X-Mailing-List: netdev@vger.kernel.org

This patch moves the TX tail pointer update into a new function,
stmmac_flush_tx_descriptors(), so that it can be reused in stmmac_xmit(),
stmmac_tso_xmit() and the upcoming XDP implementation.
Signed-off-by: Ong Boon Leong
---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 52 +++++++++----------
 1 file changed, 24 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index ace3c3835a9f..18578239b438 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3507,6 +3507,28 @@ static void stmmac_tso_allocator(struct stmmac_priv *priv, dma_addr_t des,
 	}
 }
 
+static void stmmac_flush_tx_descriptors(struct stmmac_priv *priv, int queue)
+{
+	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
+	int desc_size;
+
+	if (likely(priv->extend_desc))
+		desc_size = sizeof(struct dma_extended_desc);
+	else if (tx_q->tbs & STMMAC_TBS_AVAIL)
+		desc_size = sizeof(struct dma_edesc);
+	else
+		desc_size = sizeof(struct dma_desc);
+
+	/* The own bit must be the latest setting done when prepare the
+	 * descriptor and then barrier is needed to make sure that
+	 * all is coherent before granting the DMA engine.
+	 */
+	wmb();
+
+	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * desc_size);
+	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+}
+
 /**
  * stmmac_tso_xmit - Tx entry point of the driver for oversized frames (TSO)
  * @skb : the socket buffer
@@ -3739,12 +3761,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 		stmmac_set_tx_owner(priv, mss_desc);
 	}
 
-	/* The own bit must be the latest setting done when prepare the
-	 * descriptor and then barrier is needed to make sure that
-	 * all is coherent before granting the DMA engine.
-	 */
-	wmb();
-
 	if (netif_msg_pktdata(priv)) {
 		pr_info("%s: curr=%d dirty=%d f=%d, e=%d, f_p=%p, nfrags %d\n",
 			__func__, tx_q->cur_tx, tx_q->dirty_tx, first_entry,
@@ -3755,13 +3771,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
 
-	if (tx_q->tbs & STMMAC_TBS_AVAIL)
-		desc_size = sizeof(struct dma_edesc);
-	else
-		desc_size = sizeof(struct dma_desc);
-
-	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * desc_size);
-	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+	stmmac_flush_tx_descriptors(priv, queue);
 
 	stmmac_tx_timer_arm(priv, queue);
 
 	return NETDEV_TX_OK;
@@ -3996,25 +4006,11 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	stmmac_set_tx_owner(priv, first);
 
-	/* The own bit must be the latest setting done when prepare the
-	 * descriptor and then barrier is needed to make sure that
-	 * all is coherent before granting the DMA engine.
-	 */
-	wmb();
-
 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
 
 	stmmac_enable_dma_transmission(priv, priv->ioaddr);
 
-	if (likely(priv->extend_desc))
-		desc_size = sizeof(struct dma_extended_desc);
-	else if (tx_q->tbs & STMMAC_TBS_AVAIL)
-		desc_size = sizeof(struct dma_edesc);
-	else
-		desc_size = sizeof(struct dma_desc);
-
-	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * desc_size);
-	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+	stmmac_flush_tx_descriptors(priv, queue);
 
 	stmmac_tx_timer_arm(priv, queue);
 
 	return NETDEV_TX_OK;

From patchwork Mon Mar 29 13:40:11 2021
X-Patchwork-Submitter: Ong Boon Leong
X-Patchwork-Id: 411196
From: Ong Boon Leong
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, "David S. Miller",
    Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend
Cc: Maxime Coquelin, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, netdev@vger.kernel.org,
    linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    bpf@vger.kernel.org, Ong Boon Leong
Subject: [PATCH net-next 4/6] net: stmmac: Add initial XDP support
Date: Mon, 29 Mar 2021 21:40:11 +0800
Message-Id: <20210329134013.9516-5-boon.leong.ong@intel.com>
In-Reply-To: <20210329134013.9516-1-boon.leong.ong@intel.com>
References: <20210329134013.9516-1-boon.leong.ong@intel.com>
X-Mailing-List: netdev@vger.kernel.org

This patch adds initial XDP support to the stmmac driver. It supports the
XDP_PASS, XDP_DROP and XDP_ABORTED actions. Upcoming patches will add
support for XDP_TX and XDP_REDIRECT.

To support XDP headroom, this patch adds a page_offset field to the RX
buffer and adjusts the dma_sync_single_for_device|cpu() calls accordingly.
The DMA addresses used for RX operation now take page_offset into account
as well. Since page_pool can handle dma_sync_single_for_device() on behalf
of the driver when the PP_FLAG_DMA_SYNC_DEV flag is set, the stmmac driver
skips doing it itself.

The stmmac driver supports split header (SPH) in RX, but the flexibility of
splitting header and payload at arbitrary positions makes SPH very complex
to support under XDP processing. In addition, jumbo frames are not
supported with XDP in order to keep the initial code simple.

This patch has been tested with the sample app "xdp1" located in the
samples/bpf directory, in both SKB and native (XDP) modes. The burst
traffic was generated using pktgen_sample03_burst_single_flow.sh in the
samples/pktgen directory.

Signed-off-by: Ong Boon Leong
---
 drivers/net/ethernet/stmicro/stmmac/Makefile  |   1 +
 drivers/net/ethernet/stmicro/stmmac/stmmac.h  |  21 ++-
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 146 +++++++++++++++---
 .../net/ethernet/stmicro/stmmac/stmmac_xdp.c  |  40 +++++
 .../net/ethernet/stmicro/stmmac/stmmac_xdp.h  |  12 ++
 5 files changed, 195 insertions(+), 25 deletions(-)
 create mode 100644 drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
 create mode 100644 drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h

diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile
index 366740ab9c5a..f2e478b884b0 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Makefile
+++ b/drivers/net/ethernet/stmicro/stmmac/Makefile
@@ -6,6 +6,7 @@ stmmac-objs:= stmmac_main.o stmmac_ethtool.o stmmac_mdio.o ring_mode.o	\
 	      mmc_core.o stmmac_hwtstamp.o stmmac_ptp.o dwmac4_descs.o	\
 	      dwmac4_dma.o dwmac4_lib.o dwmac4_core.o dwmac5.o hwif.o \
 	      stmmac_tc.o dwxgmac2_core.o dwxgmac2_dma.o dwxgmac2_descs.o \
+	      stmmac_xdp.o \
 	      $(stmmac-y)
 
 stmmac-$(CONFIG_STMMAC_SELFTESTS) += stmmac_selftests.o
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index e293423f98c3..e72224c8fbac 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -68,8 +68,9 @@ struct stmmac_tx_queue {
 
 struct stmmac_rx_buffer {
 	struct page *page;
-	struct page *sec_page;
 	dma_addr_t addr;
+	__u32 page_offset;
+	struct page *sec_page;
 	dma_addr_t sec_addr;
 };
 
@@ -269,6 +270,9 @@ struct stmmac_priv {
 
 	/* Receive Side Scaling */
 	struct stmmac_rss rss;
+
+	/* XDP BPF Program */
+	struct bpf_prog *xdp_prog;
 };
 
 enum stmmac_state {
@@ -285,6 +289,8 @@ void stmmac_set_ethtool_ops(struct net_device *netdev);
 
 void stmmac_ptp_register(struct stmmac_priv *priv);
 void stmmac_ptp_unregister(struct stmmac_priv *priv);
+int stmmac_open(struct net_device *dev);
+int stmmac_release(struct net_device *dev);
 int stmmac_resume(struct device *dev);
 int stmmac_suspend(struct device *dev);
 int stmmac_dvr_remove(struct device *dev);
@@ -298,6 +304,19 @@ int stmmac_reinit_ringparam(struct net_device *dev, u32 rx_size, u32 tx_size);
 int stmmac_bus_clks_config(struct stmmac_priv *priv, bool enabled);
 void stmmac_fpe_handshake(struct stmmac_priv *priv, bool enable);
 
+static inline bool stmmac_xdp_is_enabled(struct stmmac_priv *priv)
+{
+	return !!priv->xdp_prog;
+}
+
+static inline unsigned int stmmac_rx_offset(struct stmmac_priv *priv)
+{
+	if (stmmac_xdp_is_enabled(priv))
+		return XDP_PACKET_HEADROOM;
+
+	return 0;
+}
+
 #if IS_ENABLED(CONFIG_STMMAC_SELFTESTS)
 void stmmac_selftest_run(struct net_device *dev, struct
ethtool_test *etest, u64 *buf); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 18578239b438..fd29c36860c9 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -38,9 +38,11 @@ #include #include #include +#include #include #include "stmmac_ptp.h" #include "stmmac.h" +#include "stmmac_xdp.h" #include #include #include "dwmac1000.h" @@ -67,6 +69,9 @@ MODULE_PARM_DESC(phyaddr, "Physical device address"); #define STMMAC_TX_THRESH(x) ((x)->dma_tx_size / 4) #define STMMAC_RX_THRESH(x) ((x)->dma_rx_size / 4) +#define STMMAC_XDP_PASS 0 +#define STMMAC_XDP_CONSUMED BIT(0) + static int flow_ctrl = FLOW_AUTO; module_param(flow_ctrl, int, 0644); MODULE_PARM_DESC(flow_ctrl, "Flow control ability [on/off]"); @@ -1384,6 +1389,7 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, buf->page = page_pool_dev_alloc_pages(rx_q->page_pool); if (!buf->page) return -ENOMEM; + buf->page_offset = stmmac_rx_offset(priv); if (priv->sph) { buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool); @@ -1397,7 +1403,8 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p, stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false); } - buf->addr = page_pool_get_dma_addr(buf->page); + buf->addr = page_pool_get_dma_addr(buf->page) + buf->page_offset; + stmmac_set_desc_addr(priv, p, buf->addr); if (priv->dma_buf_sz == BUF_SIZE_16KiB) stmmac_init_desc3(priv, p); @@ -1503,7 +1510,8 @@ static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv) if (!buf->page) goto err_reinit_rx_buffers; - buf->addr = page_pool_get_dma_addr(buf->page); + buf->addr = page_pool_get_dma_addr(buf->page) + + buf->page_offset; } if (priv->sph && !buf->sec_page) { @@ -1821,6 +1829,7 @@ static void free_dma_tx_desc_resources(struct stmmac_priv *priv) */ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) { + bool xdp_prog = stmmac_xdp_is_enabled(priv); u32 rx_count = priv->plat->rx_queues_to_use; int ret = -ENOMEM; u32 queue; @@ -1834,13 +1843,15 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) rx_q->queue_index = queue; rx_q->priv_data = priv; - pp_params.flags = PP_FLAG_DMA_MAP; + pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV; pp_params.pool_size = priv->dma_rx_size; num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE); pp_params.order = ilog2(num_pages); pp_params.nid = dev_to_node(priv->device); pp_params.dev = priv->device; - pp_params.dma_dir = DMA_FROM_DEVICE; + pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; + pp_params.offset = stmmac_rx_offset(priv); + pp_params.max_len = STMMAC_MAX_RX_BUF_SIZE(num_pages); rx_q->page_pool = page_pool_create(&pp_params); if (IS_ERR(rx_q->page_pool)) { @@ -3257,7 +3268,7 @@ static int stmmac_request_irq(struct net_device *dev) * 0 on success and an appropriate (-)ve integer as defined in errno.h * file on failure. */ -static int stmmac_open(struct net_device *dev) +int stmmac_open(struct net_device *dev) { struct stmmac_priv *priv = netdev_priv(dev); int bfsize = 0; @@ -3380,7 +3391,7 @@ static void stmmac_fpe_stop_wq(struct stmmac_priv *priv) * Description: * This is the stop entry point of the driver. 
*/ -static int stmmac_release(struct net_device *dev) +int stmmac_release(struct net_device *dev) { struct stmmac_priv *priv = netdev_priv(dev); u32 chan; @@ -3560,10 +3571,10 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) { struct dma_desc *desc, *first, *mss_desc = NULL; struct stmmac_priv *priv = netdev_priv(dev); - int desc_size, tmp_pay_len = 0, first_tx; int nfrags = skb_shinfo(skb)->nr_frags; u32 queue = skb_get_queue_mapping(skb); unsigned int first_entry, tx_packets; + int tmp_pay_len = 0, first_tx; struct stmmac_tx_queue *tx_q; bool has_vlan, set_ic; u8 proto_hdr_len, hdr; @@ -3801,10 +3812,10 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) int nfrags = skb_shinfo(skb)->nr_frags; int gso = skb_shinfo(skb)->gso_type; struct dma_edesc *tbs_desc = NULL; - int entry, desc_size, first_tx; struct dma_desc *desc, *first; struct stmmac_tx_queue *tx_q; bool has_vlan, set_ic; + int entry, first_tx; dma_addr_t des; tx_q = &priv->tx_queue[queue]; @@ -4080,18 +4091,9 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue) break; buf->sec_addr = page_pool_get_dma_addr(buf->sec_page); - - dma_sync_single_for_device(priv->device, buf->sec_addr, - len, DMA_FROM_DEVICE); } - buf->addr = page_pool_get_dma_addr(buf->page); - - /* Sync whole allocation to device. This will invalidate old - * data. - */ - dma_sync_single_for_device(priv->device, buf->addr, len, - DMA_FROM_DEVICE); + buf->addr = page_pool_get_dma_addr(buf->page) + buf->page_offset; stmmac_set_desc_addr(priv, p, buf->addr); if (priv->sph) @@ -4170,6 +4172,42 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv, return plen - len; } +static struct sk_buff *stmmac_xdp_run_prog(struct stmmac_priv *priv, + struct xdp_buff *xdp) +{ + struct bpf_prog *prog; + int res; + u32 act; + + rcu_read_lock(); + + prog = READ_ONCE(priv->xdp_prog); + if (!prog) { + res = STMMAC_XDP_PASS; + goto unlock; + } + + act = bpf_prog_run_xdp(prog, xdp); + switch (act) { + case XDP_PASS: + res = STMMAC_XDP_PASS; + break; + default: + bpf_warn_invalid_xdp_action(act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(priv->dev, prog, act); + fallthrough; + case XDP_DROP: + res = STMMAC_XDP_CONSUMED; + break; + } + +unlock: + rcu_read_unlock(); + return ERR_PTR(-res); +} + /** * stmmac_rx - manage the receive process * @priv: driver private structure @@ -4185,8 +4223,14 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) unsigned int count = 0, error = 0, len = 0; int status = 0, coe = priv->hw->rx_csum; unsigned int next_entry = rx_q->cur_rx; + enum dma_data_direction dma_dir; unsigned int desc_size; struct sk_buff *skb = NULL; + struct xdp_buff xdp; + int buf_sz; + + dma_dir = page_pool_get_dma_dir(rx_q->page_pool); + buf_sz = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE) * PAGE_SIZE; if (netif_msg_rx_status(priv)) { void *rx_head; @@ -4303,6 +4347,42 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) len -= ETH_FCS_LEN; } + if (!skb) { + dma_sync_single_for_cpu(priv->device, buf->addr, + buf1_len, dma_dir); + + xdp.data = page_address(buf->page) + buf->page_offset; + xdp.data_end = xdp.data + len; + xdp.data_hard_start = page_address(buf->page); + xdp_set_data_meta_invalid(&xdp); + xdp.frame_sz = buf_sz; + + skb = stmmac_xdp_run_prog(priv, &xdp); + + /* For Not XDP_PASS verdict */ + if (IS_ERR(skb)) { + unsigned int xdp_res = -PTR_ERR(skb); + + if (xdp_res & STMMAC_XDP_CONSUMED) { + 
page_pool_recycle_direct(rx_q->page_pool, + buf->page); + buf->page = NULL; + priv->dev->stats.rx_dropped++; + + /* Clear skb as it was set as + * status by XDP program. + */ + skb = NULL; + + if (unlikely((status & rx_not_ls))) + goto read_again; + + count++; + continue; + } + } + } + if (!skb) { skb = napi_alloc_skb(&ch->rx_napi, buf1_len); if (!skb) { @@ -4311,9 +4391,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) goto drain_data; } - dma_sync_single_for_cpu(priv->device, buf->addr, - buf1_len, DMA_FROM_DEVICE); - skb_copy_to_linear_data(skb, page_address(buf->page), + skb_copy_to_linear_data(skb, page_address(buf->page) + + buf->page_offset, buf1_len); skb_put(skb, buf1_len); @@ -4322,9 +4401,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) buf->page = NULL; } else if (buf1_len) { dma_sync_single_for_cpu(priv->device, buf->addr, - buf1_len, DMA_FROM_DEVICE); + buf1_len, dma_dir); skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, - buf->page, 0, buf1_len, + buf->page, buf->page_offset, buf1_len, priv->dma_buf_sz); /* Data payload appended into SKB */ @@ -4334,7 +4413,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) if (buf2_len) { dma_sync_single_for_cpu(priv->device, buf->sec_addr, - buf2_len, DMA_FROM_DEVICE); + buf2_len, dma_dir); skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, buf->sec_page, 0, buf2_len, priv->dma_buf_sz); @@ -4492,6 +4571,11 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu) return -EBUSY; } + if (stmmac_xdp_is_enabled(priv) && new_mtu > ETH_DATA_LEN) { + netdev_dbg(priv->dev, "Jumbo frames not supported for XDP\n"); + return -EINVAL; + } + new_mtu = STMMAC_ALIGN(new_mtu); /* If condition true, FIFO is too small or MTU too large */ @@ -4553,6 +4637,7 @@ static int stmmac_set_features(struct net_device *netdev, stmmac_rx_ipc(priv, priv->hw); sph_en = (priv->hw->rx_csum > 0) && priv->sph; + for (chan = 0; chan < priv->plat->rx_queues_to_use; chan++) stmmac_enable_sph(priv, priv->ioaddr, sph_en, chan); @@ -5288,6 +5373,18 @@ static int stmmac_vlan_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vi return ret; } +static int stmmac_bpf(struct net_device *dev, struct netdev_bpf *bpf) +{ + struct stmmac_priv *priv = netdev_priv(dev); + + switch (bpf->command) { + case XDP_SETUP_PROG: + return stmmac_xdp_set_prog(priv, bpf->prog, bpf->extack); + default: + return -EOPNOTSUPP; + } +} + static const struct net_device_ops stmmac_netdev_ops = { .ndo_open = stmmac_open, .ndo_start_xmit = stmmac_xmit, @@ -5306,6 +5403,7 @@ static const struct net_device_ops stmmac_netdev_ops = { .ndo_set_mac_address = stmmac_set_mac_address, .ndo_vlan_rx_add_vid = stmmac_vlan_rx_add_vid, .ndo_vlan_rx_kill_vid = stmmac_vlan_rx_kill_vid, + .ndo_bpf = stmmac_bpf, }; static void stmmac_reset_subtask(struct stmmac_priv *priv) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c new file mode 100644 index 000000000000..bf38d231860b --- /dev/null +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c @@ -0,0 +1,40 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2021, Intel Corporation. 
 */
+
+#include "stmmac.h"
+#include "stmmac_xdp.h"
+
+int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog,
+			struct netlink_ext_ack *extack)
+{
+	struct net_device *dev = priv->dev;
+	struct bpf_prog *old_prog;
+	bool need_update;
+	bool if_running;
+
+	if_running = netif_running(dev);
+
+	if (prog && dev->mtu > ETH_DATA_LEN) {
+		/* For now, the driver doesn't support XDP functionality with
+		 * jumbo frames so we return error.
+		 */
+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported");
+		return -EOPNOTSUPP;
+	}
+
+	need_update = !!priv->xdp_prog != !!prog;
+	if (if_running && need_update)
+		stmmac_release(dev);
+
+	old_prog = xchg(&priv->xdp_prog, prog);
+	if (old_prog)
+		bpf_prog_put(old_prog);
+
+	/* Disable RX SPH for XDP operation */
+	priv->sph = priv->sph_cap && !stmmac_xdp_is_enabled(priv);
+
+	if (if_running && need_update)
+		stmmac_open(dev);
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h
new file mode 100644
index 000000000000..93948569d92a
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2021, Intel Corporation. */
+
+#ifndef _STMMAC_XDP_H_
+#define _STMMAC_XDP_H_
+
+#define STMMAC_MAX_RX_BUF_SIZE(num)	(((num) * PAGE_SIZE) - XDP_PACKET_HEADROOM)
+
+int stmmac_xdp_set_prog(struct stmmac_priv *priv, struct bpf_prog *prog,
+			struct netlink_ext_ack *extack);
+
+#endif /* _STMMAC_XDP_H_ */

From patchwork Mon Mar 29 13:40:12 2021
X-Patchwork-Submitter: Ong Boon Leong
X-Patchwork-Id: 411197
From: Ong Boon Leong
To: Giuseppe Cavallaro, Alexandre Torgue, Jose Abreu, "David S. Miller",
    Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend
Cc: Maxime Coquelin, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, KP Singh, netdev@vger.kernel.org,
    linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    bpf@vger.kernel.org, Ong Boon Leong
Subject: [PATCH net-next 5/6] net: stmmac: Add support for XDP_TX action
Date: Mon, 29 Mar 2021 21:40:12 +0800
Message-Id: <20210329134013.9516-6-boon.leong.ong@intel.com>
In-Reply-To: <20210329134013.9516-1-boon.leong.ong@intel.com>
References: <20210329134013.9516-1-boon.leong.ong@intel.com>
X-Mailing-List: netdev@vger.kernel.org

This patch adds support for the XDP_TX action, which enables an XDP
program to transmit received frames back out. It has been tested with the
"xdp2" app located in the samples/bpf directory. The DUT received burst
traffic generated using the pktgen script
'pktgen_sample03_burst_single_flow.sh'.

Signed-off-by: Ong Boon Leong
---
 drivers/net/ethernet/stmicro/stmmac/stmmac.h  |  12 +-
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 220 ++++++++++++++++--
 2 files changed, 214 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index e72224c8fbac..a93e22a6be59 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -36,12 +36,18 @@ struct stmmac_resources {
 	int tx_irq[MTL_MAX_TX_QUEUES];
 };
 
+enum stmmac_txbuf_type {
+	STMMAC_TXBUF_T_SKB,
+	STMMAC_TXBUF_T_XDP_TX,
+};
+
 struct stmmac_tx_info {
 	dma_addr_t buf;
 	bool map_as_page;
 	unsigned len;
 	bool last_segment;
 	bool is_jumbo;
+	enum stmmac_txbuf_type buf_type;
 };
 
 #define STMMAC_TBS_AVAIL	BIT(0)
@@ -57,7 +63,10 @@ struct stmmac_tx_queue {
 	struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp;
 	struct dma_edesc *dma_entx;
 	struct dma_desc *dma_tx;
-	struct sk_buff **tx_skbuff;
+	union {
+		struct sk_buff **tx_skbuff;
+		struct xdp_frame **xdpf;
+	};
 	struct stmmac_tx_info *tx_skbuff_dma;
 	unsigned int cur_tx;
 	unsigned int dirty_tx;
@@ -77,6 +86,7 @@ struct stmmac_rx_buffer {
 struct stmmac_rx_queue {
 	u32 rx_count_frames;
 	u32 queue_index;
+	struct xdp_rxq_info xdp_rxq;
 	struct page_pool *page_pool;
 	struct stmmac_rx_buffer *buf_pool;
 	struct stmmac_priv *priv_data;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index fd29c36860c9..b92355561609 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -71,6 +71,7 @@ MODULE_PARM_DESC(phyaddr, "Physical device address");
 
 #define STMMAC_XDP_PASS		0
 #define STMMAC_XDP_CONSUMED	BIT(0)
+#define STMMAC_XDP_TX		BIT(1)
 
 static int flow_ctrl = FLOW_AUTO;
 module_param(flow_ctrl, int, 0644);
@@ -1442,7 +1443,8 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i)
 {
 	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
 
-	if (tx_q->tx_skbuff_dma[i].buf) {
+	if (tx_q->tx_skbuff_dma[i].buf &&
+	    tx_q->tx_skbuff_dma[i].buf_type != STMMAC_TXBUF_T_XDP_TX) {
 		if (tx_q->tx_skbuff_dma[i].map_as_page)
 			dma_unmap_page(priv->device,
 				       tx_q->tx_skbuff_dma[i].buf,
@@ -1455,12 +1457,20 @@ static void stmmac_free_tx_buffer(struct stmmac_priv *priv, u32 queue, int i)
 			       DMA_TO_DEVICE);
 	}
 
-	if (tx_q->tx_skbuff[i]) {
+	if (tx_q->xdpf[i] &&
+	    tx_q->tx_skbuff_dma[i].buf_type ==
STMMAC_TXBUF_T_XDP_TX) { + xdp_return_frame(tx_q->xdpf[i]); + tx_q->xdpf[i] = NULL; + } + + if (tx_q->tx_skbuff[i] && + tx_q->tx_skbuff_dma[i].buf_type == STMMAC_TXBUF_T_SKB) { dev_kfree_skb_any(tx_q->tx_skbuff[i]); tx_q->tx_skbuff[i] = NULL; - tx_q->tx_skbuff_dma[i].buf = 0; - tx_q->tx_skbuff_dma[i].map_as_page = false; } + + tx_q->tx_skbuff_dma[i].buf = 0; + tx_q->tx_skbuff_dma[i].map_as_page = false; } /** @@ -1568,6 +1578,7 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) for (queue = 0; queue < rx_count; queue++) { struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + int ret; netif_dbg(priv, probe, priv->dev, "(%s) dma_rx_phy=0x%08x\n", __func__, @@ -1575,6 +1586,14 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags) stmmac_clear_rx_descriptors(priv, queue); + WARN_ON(xdp_rxq_info_reg_mem_model(&rx_q->xdp_rxq, + MEM_TYPE_PAGE_POOL, + rx_q->page_pool)); + + netdev_info(priv->dev, + "Register MEM_TYPE_PAGE_POOL RxQ-%d\n", + rx_q->queue_index); + for (i = 0; i < priv->dma_rx_size; i++) { struct dma_desc *p; @@ -1775,6 +1794,9 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv) sizeof(struct dma_extended_desc), rx_q->dma_erx, rx_q->dma_rx_phy); + if (xdp_rxq_info_is_reg(&rx_q->xdp_rxq)) + xdp_rxq_info_unreg(&rx_q->xdp_rxq); + kfree(rx_q->buf_pool); if (rx_q->page_pool) page_pool_destroy(rx_q->page_pool); @@ -1837,8 +1859,10 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) /* RX queues buffers and DMA */ for (queue = 0; queue < rx_count; queue++) { struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; + struct stmmac_channel *ch = &priv->channel[queue]; struct page_pool_params pp_params = { 0 }; unsigned int num_pages; + int ret; rx_q->queue_index = queue; rx_q->priv_data = priv; @@ -1884,6 +1908,14 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv) if (!rx_q->dma_rx) goto err_dma; } + + ret = xdp_rxq_info_reg(&rx_q->xdp_rxq, priv->dev, + rx_q->queue_index, + ch->rx_napi.napi_id); + if (ret) { + netdev_err(priv->dev, "Failed to register xdp rxq info\n"); + goto err_dma; + } } return 0; @@ -1985,11 +2017,13 @@ static int alloc_dma_desc_resources(struct stmmac_priv *priv) */ static void free_dma_desc_resources(struct stmmac_priv *priv) { - /* Release the DMA RX socket buffers */ - free_dma_rx_desc_resources(priv); - /* Release the DMA TX socket buffers */ free_dma_tx_desc_resources(priv); + + /* Release the DMA RX socket buffers later + * to ensure all pending XDP_TX buffers are returned. 
+ */ + free_dma_rx_desc_resources(priv); } /** @@ -2181,10 +2215,22 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) entry = tx_q->dirty_tx; while ((entry != tx_q->cur_tx) && (count < budget)) { - struct sk_buff *skb = tx_q->tx_skbuff[entry]; + struct xdp_frame *xdpf; + struct sk_buff *skb; struct dma_desc *p; int status; + if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_XDP_TX) { + xdpf = tx_q->xdpf[entry]; + skb = NULL; + } else if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_SKB) { + xdpf = NULL; + skb = tx_q->tx_skbuff[entry]; + } else { + xdpf = NULL; + skb = NULL; + } + if (priv->extend_desc) p = (struct dma_desc *)(tx_q->dma_etx + entry); else if (tx_q->tbs & STMMAC_TBS_AVAIL) @@ -2214,10 +2260,12 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) priv->dev->stats.tx_packets++; priv->xstats.tx_pkt_n++; } - stmmac_get_tx_hwtstamp(priv, p, skb); + if (skb) + stmmac_get_tx_hwtstamp(priv, p, skb); } - if (likely(tx_q->tx_skbuff_dma[entry].buf)) { + if (likely(tx_q->tx_skbuff_dma[entry].buf && + tx_q->tx_skbuff_dma[entry].buf_type != STMMAC_TXBUF_T_XDP_TX)) { if (tx_q->tx_skbuff_dma[entry].map_as_page) dma_unmap_page(priv->device, tx_q->tx_skbuff_dma[entry].buf, @@ -2238,11 +2286,19 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) tx_q->tx_skbuff_dma[entry].last_segment = false; tx_q->tx_skbuff_dma[entry].is_jumbo = false; - if (likely(skb != NULL)) { - pkts_compl++; - bytes_compl += skb->len; - dev_consume_skb_any(skb); - tx_q->tx_skbuff[entry] = NULL; + if (xdpf && + tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_XDP_TX) { + xdp_return_frame_rx_napi(xdpf); + tx_q->xdpf[entry] = NULL; + } + + if (tx_q->tx_skbuff_dma[entry].buf_type == STMMAC_TXBUF_T_SKB) { + if (likely(skb)) { + pkts_compl++; + bytes_compl += skb->len; + dev_consume_skb_any(skb); + tx_q->tx_skbuff[entry] = NULL; + } } stmmac_release_tx_desc(priv, p, priv->mode); @@ -3656,6 +3712,8 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) tx_q->tx_skbuff_dma[first_entry].buf = des; tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb); + tx_q->tx_skbuff_dma[first_entry].map_as_page = false; + tx_q->tx_skbuff_dma[first_entry].buf_type = STMMAC_TXBUF_T_SKB; if (priv->dma_cap.addr64 <= 32) { first->des0 = cpu_to_le32(des); @@ -3691,12 +3749,14 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des; tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_frag_size(frag); tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true; + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB; } tx_q->tx_skbuff_dma[tx_q->cur_tx].last_segment = true; /* Only the last descriptor gets to point to the skb. */ tx_q->tx_skbuff[tx_q->cur_tx] = skb; + tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB; /* Manage tx mitigation */ tx_packets = (tx_q->cur_tx + 1) - first_tx; @@ -3903,6 +3963,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) tx_q->tx_skbuff_dma[entry].map_as_page = true; tx_q->tx_skbuff_dma[entry].len = len; tx_q->tx_skbuff_dma[entry].last_segment = last_segment; + tx_q->tx_skbuff_dma[entry].buf_type = STMMAC_TXBUF_T_SKB; /* Prepare the descriptor and set the own bit too */ stmmac_prepare_tx_desc(priv, desc, 0, len, csum_insertion, @@ -3911,6 +3972,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) /* Only the last descriptor gets to point to the skb. 
*/ tx_q->tx_skbuff[entry] = skb; + tx_q->tx_skbuff_dma[entry].buf_type = STMMAC_TXBUF_T_SKB; /* According to the coalesce parameter the IC bit for the latest * segment is reset and the timer re-started to clean the tx status. @@ -3989,6 +4051,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev) goto dma_map_err; tx_q->tx_skbuff_dma[first_entry].buf = des; + tx_q->tx_skbuff_dma[first_entry].buf_type = STMMAC_TXBUF_T_SKB; + tx_q->tx_skbuff_dma[first_entry].map_as_page = false; stmmac_set_desc_addr(priv, first, des); @@ -4172,6 +4236,108 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv, return plen - len; } +static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue, + struct xdp_frame *xdpf) +{ + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; + struct page *page = virt_to_page(xdpf->data); + unsigned int entry = tx_q->cur_tx; + struct dma_desc *tx_desc; + dma_addr_t dma_addr; + bool set_ic; + + if (stmmac_tx_avail(priv, queue) < STMMAC_TX_THRESH(priv)) + return STMMAC_XDP_CONSUMED; + + if (likely(priv->extend_desc)) + tx_desc = (struct dma_desc *)(tx_q->dma_etx + entry); + else if (tx_q->tbs & STMMAC_TBS_AVAIL) + tx_desc = &tx_q->dma_entx[entry].basic; + else + tx_desc = tx_q->dma_tx + entry; + + dma_addr = page_pool_get_dma_addr(page) + sizeof(*xdpf) + + xdpf->headroom; + dma_sync_single_for_device(priv->device, dma_addr, + xdpf->len, DMA_BIDIRECTIONAL); + + tx_q->tx_skbuff_dma[entry].buf_type = STMMAC_TXBUF_T_XDP_TX; + + tx_q->tx_skbuff_dma[entry].buf = dma_addr; + tx_q->tx_skbuff_dma[entry].map_as_page = false; + tx_q->tx_skbuff_dma[entry].len = xdpf->len; + tx_q->tx_skbuff_dma[entry].last_segment = true; + tx_q->tx_skbuff_dma[entry].is_jumbo = false; + + tx_q->xdpf[entry] = xdpf; + + stmmac_set_desc_addr(priv, tx_desc, dma_addr); + + stmmac_prepare_tx_desc(priv, tx_desc, 1, xdpf->len, + true, priv->mode, true, true, + xdpf->len); + + tx_q->tx_count_frames++; + + if (tx_q->tx_count_frames % priv->tx_coal_frames[queue] == 0) + set_ic = true; + else + set_ic = false; + + if (set_ic) { + tx_q->tx_count_frames = 0; + stmmac_set_tx_ic(priv, tx_desc); + priv->xstats.tx_set_ic_bit++; + } + + stmmac_enable_dma_transmission(priv, priv->ioaddr); + + entry = STMMAC_GET_ENTRY(entry, priv->dma_tx_size); + tx_q->cur_tx = entry; + + return STMMAC_XDP_TX; +} + +static int stmmac_xdp_get_tx_queue(struct stmmac_priv *priv, + int cpu) +{ + int index = cpu; + + if (unlikely(index < 0)) + index = 0; + + while (index >= priv->plat->tx_queues_to_use) + index -= priv->plat->tx_queues_to_use; + + return index; +} + +static int stmmac_xdp_xmit_back(struct stmmac_priv *priv, + struct xdp_buff *xdp) +{ + struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp); + int cpu = smp_processor_id(); + struct netdev_queue *nq; + int queue; + int res; + + if (unlikely(!xdpf)) + return -EFAULT; + + queue = stmmac_xdp_get_tx_queue(priv, cpu); + nq = netdev_get_tx_queue(priv->dev, queue); + + __netif_tx_lock(nq, cpu); + res = stmmac_xdp_xmit_xdpf(priv, queue, xdpf); + if (res == STMMAC_XDP_TX) { + stmmac_flush_tx_descriptors(priv, queue); + stmmac_tx_timer_arm(priv, queue); + } + __netif_tx_unlock(nq); + + return res; +} + static struct sk_buff *stmmac_xdp_run_prog(struct stmmac_priv *priv, struct xdp_buff *xdp) { @@ -4192,6 +4358,9 @@ static struct sk_buff *stmmac_xdp_run_prog(struct stmmac_priv *priv, case XDP_PASS: res = STMMAC_XDP_PASS; break; + case XDP_TX: + res = stmmac_xdp_xmit_back(priv, xdp); + break; default: bpf_warn_invalid_xdp_action(act); fallthrough; 
@@ -4348,6 +4517,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) } if (!skb) { + unsigned int pre_len, sync_len; + dma_sync_single_for_cpu(priv->device, buf->addr, buf1_len, dma_dir); @@ -4356,16 +4527,26 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) xdp.data_hard_start = page_address(buf->page); xdp_set_data_meta_invalid(&xdp); xdp.frame_sz = buf_sz; + xdp.rxq = &rx_q->xdp_rxq; + pre_len = xdp.data_end - xdp.data_hard_start - + buf->page_offset; skb = stmmac_xdp_run_prog(priv, &xdp); + /* Due xdp_adjust_tail: DMA sync for_device + * cover max len CPU touch + */ + sync_len = xdp.data_end - xdp.data_hard_start - + buf->page_offset; + sync_len = max(sync_len, pre_len); /* For Not XDP_PASS verdict */ if (IS_ERR(skb)) { unsigned int xdp_res = -PTR_ERR(skb); if (xdp_res & STMMAC_XDP_CONSUMED) { - page_pool_recycle_direct(rx_q->page_pool, - buf->page); + page_pool_put_page(rx_q->page_pool, + virt_to_head_page(xdp.data), + sync_len, true); buf->page = NULL; priv->dev->stats.rx_dropped++; @@ -4379,6 +4560,11 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) count++; continue; + } else if (xdp_res & STMMAC_XDP_TX) { + buf->page = NULL; + skb = NULL; + count++; + continue; } } }
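
For reference, the XDP_TX path added by patch 5/6 is exercised by programs
such as samples/bpf/xdp2, which rewrite the Ethernet header and return
XDP_TX. The following is a minimal sketch of such a reflector; it is
illustrative only and not part of this series (the program and section
names are made up), but it is the kind of program that drives the
stmmac_xdp_xmit_back() path shown above:

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

/* Illustrative xdp2-style reflector: swap the Ethernet source/destination
 * MACs and bounce the frame back out of the same interface with XDP_TX.
 * Frames too short to carry an Ethernet header are passed up the stack.
 */
SEC("xdp")
int xdp_tx_reflect(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	unsigned char tmp[ETH_ALEN];

	/* Bounds check required by the verifier before touching the header */
	if (data + sizeof(*eth) > data_end)
		return XDP_PASS;

	__builtin_memcpy(tmp, eth->h_source, ETH_ALEN);
	__builtin_memcpy(eth->h_source, eth->h_dest, ETH_ALEN);
	__builtin_memcpy(eth->h_dest, tmp, ETH_ALEN);

	return XDP_TX;
}

char _license[] SEC("license") = "GPL";

Such a program can be attached either through the samples/bpf loaders or
with iproute2, e.g. 'ip link set dev <iface> xdp obj <prog.o> sec xdp',
which ends up in the driver's new .ndo_bpf/XDP_SETUP_PROG handler.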