From patchwork Thu Dec 10 01:02:44 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 341702
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Sven Auhagen, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Dan Carpenter, Maciej Fijalkowski,
 Sandeep Penigalapati
Subject: [net 1/9] igb: XDP xmit back fix error code
Date: Wed, 9 Dec 2020 17:02:44 -0800
Message-Id: <20201210010252.4029245-2-anthony.l.nguyen@intel.com>

From: Sven Auhagen

The igb XDP xmit back function should only return the defined
IGB_XDP_* codes, never a negative errno such as -ENXIO: its callers
interpret the return value as an IGB_XDP_* status, not as an error
number.

Fixes: 9cbc948b5a20 ("igb: add XDP support")
Reported-by: Dan Carpenter
Acked-by: Maciej Fijalkowski
Signed-off-by: Sven Auhagen
Tested-by: Sandeep Penigalapati
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igb/igb_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 5fc2c381da55..08cc6f59aa2e 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2910,7 +2910,7 @@ static int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
 	 */
 	tx_ring = adapter->xdp_prog ? igb_xdp_tx_queue_mapping(adapter) : NULL;
 	if (unlikely(!tx_ring))
-		return -ENXIO;
+		return IGB_XDP_CONSUMED;
 
 	nq = txring_txq(tx_ring);
 	__netif_tx_lock(nq, cpu);
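For context on why IGB_XDP_CONSUMED is the right return value here: the igb
Rx path treats per-frame XDP verdicts as single-bit flags and ORs them into
one mask for the whole NAPI budget, so a negative errno does not fit that
contract. A minimal standalone sketch of the flag convention (the values
mirror igb's defines; run_xdp_on_frame() is a hypothetical stand-in for the
real per-frame dispatch):

/* Result flags as single bits, mirroring igb's IGB_XDP_* defines. */
#define IGB_XDP_PASS		0x0	/* hand frame to the network stack */
#define IGB_XDP_CONSUMED	0x1	/* frame dropped, or Tx setup failed */
#define IGB_XDP_TX		0x2	/* frame queued on the XDP Tx ring */
#define IGB_XDP_REDIR		0x4	/* frame redirected to another device */

unsigned int run_xdp_on_frame(void);	/* hypothetical per-frame handler */

unsigned int example_poll(int budget)
{
	unsigned int xdp_xmit = 0;
	int i;

	/* Each verdict is OR-ed into the mask; an errno such as -ENXIO
	 * would set almost every bit and corrupt the accumulated state.
	 */
	for (i = 0; i < budget; i++)
		xdp_xmit |= run_xdp_on_frame();

	return xdp_xmit;	/* caller flushes if IGB_XDP_REDIR is set */
}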
From patchwork Thu Dec 10 01:02:45 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 341701
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Sven Auhagen, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Maciej Fijalkowski, Sandeep Penigalapati
Subject: [net 2/9] igb: take VLAN double header into account
Date: Wed, 9 Dec 2020 17:02:45 -0800
Message-Id: <20201210010252.4029245-3-anthony.l.nguyen@intel.com>

From: Sven Auhagen

Increase the packet header padding to account for double (QinQ) VLAN
tagging, and introduce the IGB_ETH_PKT_HDR_PAD macro so the padding is
computed in one place.

Fixes: 9cbc948b5a20 ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski
Reviewed-by: Maciej Fijalkowski
Acked-by: Maciej Fijalkowski
Signed-off-by: Sven Auhagen
Tested-by: Sandeep Penigalapati
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igb/igb.h      | 5 +++++
 drivers/net/ethernet/intel/igb/igb_main.c | 7 +++----
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 0286d2fceee4..aaa954aae574 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -138,6 +138,8 @@ struct vf_mac_filter {
 /* this is the size past which hardware will drop packets when setting LPE=0 */
 #define MAXIMUM_ETHERNET_VLAN_SIZE 1522
 
+#define IGB_ETH_PKT_HDR_PAD	(ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
+
 /* Supported Rx Buffer Sizes */
 #define IGB_RXBUFFER_256	256
 #define IGB_RXBUFFER_1536	1536
@@ -247,6 +249,9 @@ enum igb_tx_flags {
 #define IGB_SFF_ADDRESSING_MODE		0x4
 #define IGB_SFF_8472_UNSUP		0x00
 
+/* TX resources are shared between XDP and netstack
+ * and we need to tag the buffer type to distinguish them
+ */
 enum igb_tx_buf_type {
 	IGB_TYPE_SKB = 0,
 	IGB_TYPE_XDP,
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 08cc6f59aa2e..0a9198037b98 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2826,7 +2826,7 @@ static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type,
 
 static int igb_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
 {
-	int i, frame_size = dev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+	int i, frame_size = dev->mtu + IGB_ETH_PKT_HDR_PAD;
 	struct igb_adapter *adapter = netdev_priv(dev);
 	bool running = netif_running(dev);
 	struct bpf_prog *old_prog;
@@ -3950,8 +3950,7 @@ static int igb_sw_init(struct igb_adapter *adapter)
 
 	/* set default work limits */
 	adapter->tx_work_limit = IGB_DEFAULT_TX_WORK;
 
-	adapter->max_frame_size = netdev->mtu + ETH_HLEN + ETH_FCS_LEN +
-				  VLAN_HLEN;
+	adapter->max_frame_size = netdev->mtu + IGB_ETH_PKT_HDR_PAD;
 	adapter->min_frame_size = ETH_ZLEN + ETH_FCS_LEN;
 
 	spin_lock_init(&adapter->nfc_lock);
@@ -6491,7 +6490,7 @@ static void igb_get_stats64(struct net_device *netdev,
 static int igb_change_mtu(struct net_device *netdev, int new_mtu)
 {
 	struct igb_adapter *adapter = netdev_priv(netdev);
-	int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+	int max_frame = new_mtu + IGB_ETH_PKT_HDR_PAD;
 
 	if (adapter->xdp_prog) {
 		int i;
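For reference, with the standard header lengths from the kernel's
if_ether.h and if_vlan.h, the new padding works out to 26 bytes. A
standalone arithmetic check (plain userspace C, not driver code; the
numeric defines are copied here so it compiles on its own):

#include <stdio.h>

#define ETH_HLEN    14	/* dst MAC + src MAC + EtherType */
#define ETH_FCS_LEN  4	/* frame check sequence (CRC) */
#define VLAN_HLEN    4	/* one 802.1Q tag */

#define IGB_ETH_PKT_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))

int main(void)
{
	int mtu = 1500;

	/* 14 + 4 + 2*4 = 26, so a 1500-byte MTU needs a 1526-byte frame */
	printf("pad = %d, max_frame = %d\n",
	       IGB_ETH_PKT_HDR_PAD, mtu + IGB_ETH_PKT_HDR_PAD);
	return 0;
}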
From patchwork Thu Dec 10 01:02:46 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 342704
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Sven Auhagen, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Maciej Fijalkowski, Sandeep Penigalapati
Subject: [net 3/9] igb: XDP extack message on error
Date: Wed, 9 Dec 2020 17:02:46 -0800
Message-Id: <20201210010252.4029245-4-anthony.l.nguyen@intel.com>

From: Sven Auhagen

Add an extack error message when the RX buffer size is too small for
the frame size.

Fixes: 9cbc948b5a20 ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski
Reviewed-by: Maciej Fijalkowski
Acked-by: Maciej Fijalkowski
Signed-off-by: Sven Auhagen
Tested-by: Sandeep Penigalapati
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igb/igb_main.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 0a9198037b98..a0a310a75cc5 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2824,20 +2824,25 @@ static int igb_setup_tc(struct net_device *dev, enum tc_setup_type type,
 	}
 }
 
-static int igb_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
+static int igb_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf)
 {
 	int i, frame_size = dev->mtu + IGB_ETH_PKT_HDR_PAD;
 	struct igb_adapter *adapter = netdev_priv(dev);
+	struct bpf_prog *prog = bpf->prog, *old_prog;
 	bool running = netif_running(dev);
-	struct bpf_prog *old_prog;
 	bool need_reset;
 
 	/* verify igb ring attributes are sufficient for XDP */
 	for (i = 0; i < adapter->num_rx_queues; i++) {
 		struct igb_ring *ring = adapter->rx_ring[i];
 
-		if (frame_size > igb_rx_bufsz(ring))
+		if (frame_size > igb_rx_bufsz(ring)) {
+			NL_SET_ERR_MSG_MOD(bpf->extack,
+					   "The RX buffer size is too small for the frame size");
+			netdev_warn(dev, "XDP RX buffer size %d is too small for the frame size %d\n",
+				    igb_rx_bufsz(ring), frame_size);
 			return -EINVAL;
+		}
 	}
 
 	old_prog = xchg(&adapter->xdp_prog, prog);
@@ -2869,7 +2874,7 @@ static int igb_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
-		return igb_xdp_setup(dev, xdp->prog);
+		return igb_xdp_setup(dev, xdp);
 	default:
 		return -EINVAL;
 	}
@@ -6499,7 +6504,9 @@ static int igb_change_mtu(struct net_device *netdev, int new_mtu)
 		struct igb_ring *ring = adapter->rx_ring[i];
 
 		if (max_frame > igb_rx_bufsz(ring)) {
-			netdev_warn(adapter->netdev, "Requested MTU size is not supported with XDP\n");
+			netdev_warn(adapter->netdev,
+				    "Requested MTU size is not supported with XDP. Max frame size is %d\n",
+				    max_frame);
 			return -EINVAL;
 		}
 	}
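The point of the extack message is that it travels back over the netlink
request, so the tool attaching the program (e.g. ip link set dev ... xdp
obj ...) can print the driver's reason instead of a bare -EINVAL. A
minimal sketch of the pattern in a driver's XDP setup path
(check_ring_sizes() is a hypothetical helper, not igb code):

#include <linux/netdevice.h>	/* struct net_device, struct netdev_bpf */
#include <linux/netlink.h>	/* NL_SET_ERR_MSG_MOD */

static bool check_ring_sizes(struct net_device *dev);	/* hypothetical */

static int example_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf)
{
	if (!check_ring_sizes(dev)) {
		/* Copied into the netlink ack and shown to the user */
		NL_SET_ERR_MSG_MOD(bpf->extack,
				   "MTU too large for the configured Rx buffers");
		return -EINVAL;
	}
	return 0;
}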
From patchwork Thu Dec 10 01:02:47 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 341699
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Sven Auhagen, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Maciej Fijalkowski, Sandeep Penigalapati
Subject: [net 4/9] igb: skb add metasize for xdp
Date: Wed, 9 Dec 2020 17:02:47 -0800
Message-Id: <20201210010252.4029245-5-anthony.l.nguyen@intel.com>

From: Sven Auhagen

Set the skb metadata size when an XDP program has placed metadata in
front of the packet data.

Fixes: 9cbc948b5a20 ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski
Reviewed-by: Maciej Fijalkowski
Acked-by: Maciej Fijalkowski
Signed-off-by: Sven Auhagen
Tested-by: Sandeep Penigalapati
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igb/igb_main.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index a0a310a75cc5..8e412aaba0e3 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -8357,6 +8357,7 @@ static struct sk_buff *igb_build_skb(struct igb_ring *rx_ring,
 				SKB_DATA_ALIGN(xdp->data_end -
 					       xdp->data_hard_start);
 #endif
+	unsigned int metasize = xdp->data - xdp->data_meta;
 	struct sk_buff *skb;
 
 	/* prefetch first cache line of first page */
@@ -8371,6 +8372,9 @@ static struct sk_buff *igb_build_skb(struct igb_ring *rx_ring,
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	__skb_put(skb, xdp->data_end - xdp->data);
 
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
 	/* pull timestamp out of packet data */
 	if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) {
 		igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb);
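The metadata in question is the region between xdp->data_meta and
xdp->data that an XDP program reserves with bpf_xdp_adjust_meta();
without the skb_metadata_set() call above, that region was silently
dropped when igb built the skb. A sketch of the producer side, written
as a libbpf-style XDP program (illustrative, not part of this series):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_store_meta(struct xdp_md *ctx)
{
	__u32 *mark;
	void *data, *meta;

	/* Grow the metadata area by 4 bytes in front of the packet */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*mark)))
		return XDP_PASS;	/* no headroom available */

	data = (void *)(long)ctx->data;
	meta = (void *)(long)ctx->data_meta;
	mark = meta;
	if ((void *)(mark + 1) > data)	/* bounds check for the verifier */
		return XDP_PASS;

	*mark = 42;	/* arbitrary value for a later consumer (e.g. TC) */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";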
From patchwork Thu Dec 10 01:02:48 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 341700
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Sven Auhagen, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Maciej Fijalkowski, Sandeep Penigalapati
Subject: [net 5/9] igb: use xdp_do_flush
Date: Wed, 9 Dec 2020 17:02:48 -0800
Message-Id: <20201210010252.4029245-6-anthony.l.nguyen@intel.com>

From: Sven Auhagen

Since this is a new XDP implementation, use xdp_do_flush() instead of
the older xdp_do_flush_map() name.

Fixes: 9cbc948b5a20 ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski
Reviewed-by: Maciej Fijalkowski
Acked-by: Maciej Fijalkowski
Signed-off-by: Sven Auhagen
Tested-by: Sandeep Penigalapati
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igb/igb_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 8e412aaba0e3..af6ace6c0f87 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -8781,7 +8781,7 @@ static int igb_clean_rx_irq(struct igb_q_vector *q_vector, const int budget)
 	rx_ring->skb = skb;
 
 	if (xdp_xmit & IGB_XDP_REDIR)
-		xdp_do_flush_map();
+		xdp_do_flush();
 
 	if (xdp_xmit & IGB_XDP_TX) {
 		struct igb_ring *tx_ring = igb_xdp_tx_queue_mapping(adapter);
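Although the rename is mechanical, the call's placement is worth noting:
redirected frames are buffered in per-CPU bulk queues and only pushed to
their targets (devmaps, cpumaps, xskmaps) when the driver flushes once at
the end of its NAPI poll. A schematic of that pattern, condensed from the
Rx path above (not a complete poll function; descriptor handling elided):

/* Schematic NAPI Rx loop showing where the flush belongs. */
static int example_clean_rx_irq(int budget)
{
	unsigned int xdp_xmit = 0;
	int cleaned = 0;

	while (cleaned < budget) {
		/* ...run XDP on each frame and OR the per-frame result
		 * (IGB_XDP_TX / IGB_XDP_REDIR / ...) into xdp_xmit...
		 */
		cleaned++;
	}

	/* One flush per poll, not per packet */
	if (xdp_xmit & IGB_XDP_REDIR)
		xdp_do_flush();

	return cleaned;
}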
From patchwork Thu Dec 10 01:02:49 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 342705
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Sven Auhagen, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Maciej Fijalkowski, Sandeep Penigalapati
Subject: [net 6/9] igb: avoid transmit queue timeout in xdp path
Date: Wed, 9 Dec 2020 17:02:49 -0800
Message-Id: <20201210010252.4029245-7-anthony.l.nguyen@intel.com>

From: Sven Auhagen

Since the XDP path shares the transmit queue with the network stack, a
transmit queue timeout can occur under high load when XDP is using the
queue almost exclusively, and the watchdog then resets the queue.

netdev_start_xmit() sets the trans_start variable of the transmit queue
to jiffies, which is later checked by dev_watchdog(). To avoid the
timeout, let the stack know that an XDP xmit happened by bumping
trans_start to jiffies in the XDP Tx routines as well.

Fixes: 9cbc948b5a20 ("igb: add XDP support")
Acked-by: Maciej Fijalkowski
Signed-off-by: Sven Auhagen
Tested-by: Sandeep Penigalapati
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igb/igb_main.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index af6ace6c0f87..0d343d050973 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2919,6 +2919,8 @@ static int igb_xdp_xmit_back(struct igb_adapter *adapter, struct xdp_buff *xdp)
 
 	nq = txring_txq(tx_ring);
 	__netif_tx_lock(nq, cpu);
+	/* Avoid transmit queue timeout since we share it with the slow path */
+	nq->trans_start = jiffies;
 	ret = igb_xmit_xdp_ring(adapter, tx_ring, xdpf);
 	__netif_tx_unlock(nq);
 
@@ -2951,6 +2953,9 @@ static int igb_xdp_xmit(struct net_device *dev, int n,
 
 	nq = txring_txq(tx_ring);
 	__netif_tx_lock(nq, cpu);
 
+	/* Avoid transmit queue timeout since we share it with the slow path */
+	nq->trans_start = jiffies;
+
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];
 		int err;
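The watchdog logic being pacified looks roughly like this (a simplified
model of dev_watchdog() in net/sched/sch_generic.c, not the exact kernel
source): a queue that is stopped and whose trans_start has not advanced
within watchdog_timeo is declared hung, which triggers ndo_tx_timeout()
and a reset. Bumping trans_start from the XDP Tx routines keeps a
busy-but-healthy shared queue out of that condition.

/* Simplified model of the per-queue hang check */
static bool queue_looks_hung(const struct netdev_queue *txq,
			     unsigned long watchdog_timeo)
{
	return netif_xmit_stopped(txq) &&
	       time_after(jiffies, txq->trans_start + watchdog_timeo);
}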
From patchwork Thu Dec 10 01:02:51 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 342706
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Björn Töpel, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Li RongQing, Sandeep Penigalapati
Subject: [net 8/9] ixgbe: avoid premature Rx buffer reuse
Date: Wed, 9 Dec 2020 17:02:51 -0800
Message-Id: <20201210010252.4029245-9-anthony.l.nguyen@intel.com>

From: Björn Töpel

The page recycle code incorrectly relied on the assumption that a page
fragment cannot be freed inside xdp_do_redirect(). Because of this,
page fragments that are still in use by the stack or by an XDP redirect
target can be reused and overwritten.

To avoid this, store the page count prior to invoking
xdp_do_redirect().

Fixes: 6453073987ba ("ixgbe: add initial support for xdp redirect")
Reported-and-analyzed-by: Li RongQing
Signed-off-by: Björn Töpel
Tested-by: Sandeep Penigalapati
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 24 +++++++++++++------
 1 file changed, 17 insertions(+), 7 deletions(-)
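The underlying race: ixgbe splits each page in two and decides whether to
recycle a half by comparing the page refcount against a locally tracked
pagecnt_bias. xdp_do_redirect() may drop a page reference before it
returns (for example when the redirect target transmits and frees the
frame immediately), so sampling page_count() after the call can make a
page the stack still holds look exclusively owned. A condensed sketch of
the corrected ordering (reuse_rx_page()/release_rx_page() are illustrative
stand-ins for the driver helpers, and the surrounding Rx loop is elided):

/* Snapshot the refcount BEFORE the redirect can drop it. The
 * pre-redirect count can only overestimate sharing, so the reuse
 * test below errs on the safe side (free rather than recycle).
 */
int pgcnt = page_count(rx_buffer->page);

xdp_do_redirect(netdev, &xdp, xdp_prog);	/* may free a fragment */

if ((pgcnt - rx_buffer->pagecnt_bias) <= 1)
	reuse_rx_page(rx_ring, rx_buffer);	/* driver is sole owner */
else
	release_rx_page(rx_ring, rx_buffer);	/* someone else holds it */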
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 45ae33e15303..f3f449f53920 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1945,7 +1945,8 @@ static inline bool ixgbe_page_is_reserved(struct page *page)
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }
 
-static bool ixgbe_can_reuse_rx_page(struct ixgbe_rx_buffer *rx_buffer)
+static bool ixgbe_can_reuse_rx_page(struct ixgbe_rx_buffer *rx_buffer,
+				    int rx_buffer_pgcnt)
 {
 	unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
 	struct page *page = rx_buffer->page;
@@ -1956,7 +1957,7 @@ static bool ixgbe_can_reuse_rx_page(struct ixgbe_rx_buffer *rx_buffer)
 
 #if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
-	if (unlikely((page_ref_count(page) - pagecnt_bias) > 1))
+	if (unlikely((rx_buffer_pgcnt - pagecnt_bias) > 1))
 		return false;
 #else
 	/* The last offset is a bit aggressive in that we assume the
@@ -2021,11 +2022,18 @@ static void ixgbe_add_rx_frag(struct ixgbe_ring *rx_ring,
 static struct ixgbe_rx_buffer *ixgbe_get_rx_buffer(struct ixgbe_ring *rx_ring,
 						   union ixgbe_adv_rx_desc *rx_desc,
 						   struct sk_buff **skb,
-						   const unsigned int size)
+						   const unsigned int size,
+						   int *rx_buffer_pgcnt)
 {
 	struct ixgbe_rx_buffer *rx_buffer;
 
 	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
+	*rx_buffer_pgcnt =
+#if (PAGE_SIZE < 8192)
+		page_count(rx_buffer->page);
+#else
+		0;
+#endif
 	prefetchw(rx_buffer->page);
 	*skb = rx_buffer->skb;
 
@@ -2055,9 +2063,10 @@ static struct ixgbe_rx_buffer *ixgbe_get_rx_buffer(struct ixgbe_ring *rx_ring,
 
 static void ixgbe_put_rx_buffer(struct ixgbe_ring *rx_ring,
 				struct ixgbe_rx_buffer *rx_buffer,
-				struct sk_buff *skb)
+				struct sk_buff *skb,
+				int rx_buffer_pgcnt)
 {
-	if (ixgbe_can_reuse_rx_page(rx_buffer)) {
+	if (ixgbe_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt)) {
 		/* hand second half of page back to the ring */
 		ixgbe_reuse_rx_page(rx_ring, rx_buffer);
 	} else {
@@ -2303,6 +2312,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		union ixgbe_adv_rx_desc *rx_desc;
 		struct ixgbe_rx_buffer *rx_buffer;
 		struct sk_buff *skb;
+		int rx_buffer_pgcnt;
 		unsigned int size;
 
 		/* return some buffers to hardware, one at a time is too slow */
@@ -2322,7 +2332,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		 */
 		dma_rmb();
 
-		rx_buffer = ixgbe_get_rx_buffer(rx_ring, rx_desc, &skb, size);
+		rx_buffer = ixgbe_get_rx_buffer(rx_ring, rx_desc, &skb, size, &rx_buffer_pgcnt);
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
@@ -2367,7 +2377,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 			break;
 		}
 
-		ixgbe_put_rx_buffer(rx_ring, rx_buffer, skb);
+		ixgbe_put_rx_buffer(rx_ring, rx_buffer, skb, rx_buffer_pgcnt);
 		cleaned_count++;
 
 		/* place incomplete frames back on ring for completion */
From patchwork Thu Dec 10 01:02:52 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 342707
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Björn Töpel, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, Li RongQing, George Kuruvinakunnel
Subject: [net 9/9] ice: avoid premature Rx buffer reuse
Date: Wed, 9 Dec 2020 17:02:52 -0800
Message-Id: <20201210010252.4029245-10-anthony.l.nguyen@intel.com>

From: Björn Töpel

The page recycle code incorrectly relied on the assumption that a page
fragment cannot be freed inside xdp_do_redirect(). Because of this,
page fragments that are still in use by the stack or by an XDP redirect
target can be reused and overwritten.

To avoid this, store the page count prior to invoking
xdp_do_redirect().

Fixes: efc2214b6047 ("ice: Add support for XDP")
Reported-and-analyzed-by: Li RongQing
Signed-off-by: Björn Töpel
Tested-by: George Kuruvinakunnel
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 31 ++++++++++++++++-------
 1 file changed, 22 insertions(+), 9 deletions(-)
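A note on the #if (PAGE_SIZE < 8192) guard used by both this patch and
the ixgbe one: the refcount snapshot only matters in the page-splitting
scheme used on 4K-page systems, where the "(count - bias) > 1" test
decides reuse. On larger pages the driver tracks reuse through the page
offset and pagecnt_bias alone, so the snapshot is stored as 0 and never
consulted. A hedged sketch of that shape (mirroring the pattern added to
ice_get_rx_buf() below, not a drop-in helper):

static int rx_pgcnt_snapshot(struct page *page)
{
#if (PAGE_SIZE < 8192)
	return page_count(page);	/* pre-redirect refcount */
#else
	return 0;			/* unused on large-page systems */
#endif
}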
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index eae75260fe20..23eca2f0a03b 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -762,13 +762,15 @@ ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size)
 /**
  * ice_can_reuse_rx_page - Determine if page can be reused for another Rx
  * @rx_buf: buffer containing the page
+ * @rx_buf_pgcnt: rx_buf page refcount pre xdp_do_redirect() call
  *
  * If page is reusable, we have a green light for calling ice_reuse_rx_page,
  * which will assign the current buffer to the buffer that next_to_alloc is
  * pointing to; otherwise, the DMA mapping needs to be destroyed and
  * page freed
  */
-static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
+static bool
+ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt)
 {
 	unsigned int pagecnt_bias = rx_buf->pagecnt_bias;
 	struct page *page = rx_buf->page;
@@ -779,7 +781,7 @@ static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
 
 #if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
-	if (unlikely((page_count(page) - pagecnt_bias) > 1))
+	if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1))
 		return false;
 #else
 #define ICE_LAST_OFFSET \
@@ -864,17 +866,24 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
  * @rx_ring: Rx descriptor ring to transact packets on
  * @skb: skb to be used
  * @size: size of buffer to add to skb
+ * @rx_buf_pgcnt: rx_buf page refcount
  *
  * This function will pull an Rx buffer from the ring and synchronize it
  * for use by the CPU.
  */
 static struct ice_rx_buf *
 ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,
-	       const unsigned int size)
+	       const unsigned int size, int *rx_buf_pgcnt)
 {
 	struct ice_rx_buf *rx_buf;
 
 	rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
+	*rx_buf_pgcnt =
+#if (PAGE_SIZE < 8192)
+		page_count(rx_buf->page);
+#else
+		0;
+#endif
 	prefetchw(rx_buf->page);
 	*skb = rx_buf->skb;
@@ -1006,12 +1015,15 @@ ice_construct_skb(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
  * ice_put_rx_buf - Clean up used buffer and either recycle or free
  * @rx_ring: Rx descriptor ring to transact packets on
  * @rx_buf: Rx buffer to pull data from
+ * @rx_buf_pgcnt: Rx buffer page count pre xdp_do_redirect()
  *
  * This function will update next_to_clean and then clean up the contents
  * of the rx_buf. It will either recycle the buffer or unmap it and free
  * the associated resources.
  */
-static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
+static void
+ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
+	       int rx_buf_pgcnt)
 {
 	u16 ntc = rx_ring->next_to_clean + 1;
 
@@ -1022,7 +1034,7 @@ static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
 	if (!rx_buf)
 		return;
 
-	if (ice_can_reuse_rx_page(rx_buf)) {
+	if (ice_can_reuse_rx_page(rx_buf, rx_buf_pgcnt)) {
 		/* hand second half of page back to the ring */
 		ice_reuse_rx_page(rx_ring, rx_buf);
 	} else {
@@ -1097,6 +1109,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		struct sk_buff *skb;
 		unsigned int size;
 		u16 stat_err_bits;
+		int rx_buf_pgcnt;
 		u16 vlan_tag = 0;
 		u8 rx_ptype;
 
@@ -1119,7 +1132,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		dma_rmb();
 
 		if (rx_desc->wb.rxdid == FDIR_DESC_RXDID || !rx_ring->netdev) {
-			ice_put_rx_buf(rx_ring, NULL);
+			ice_put_rx_buf(rx_ring, NULL, 0);
 			cleaned_count++;
 			continue;
 		}
@@ -1128,7 +1141,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 			ICE_RX_FLX_DESC_PKT_LEN_M;
 
 		/* retrieve a buffer from the ring */
-		rx_buf = ice_get_rx_buf(rx_ring, &skb, size);
+		rx_buf = ice_get_rx_buf(rx_ring, &skb, size, &rx_buf_pgcnt);
 
 		if (!size) {
 			xdp.data = NULL;
@@ -1168,7 +1181,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		total_rx_pkts++;
 		cleaned_count++;
 
-		ice_put_rx_buf(rx_ring, rx_buf);
+		ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt);
 		continue;
 construct_skb:
 		if (skb) {
@@ -1187,7 +1200,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 			break;
 		}
 
-		ice_put_rx_buf(rx_ring, rx_buf);
+		ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt);
 		cleaned_count++;
 
 		/* skip if it is NOP desc */