From patchwork Fri Nov 6 18:19:07 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 322212
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, lorenzo.bianconi@redhat.com, davem@davemloft.net,
 kuba@kernel.org, brouer@redhat.com, ilias.apalodimas@linaro.org
Subject: [PATCH v4 net-next 1/5] net: xdp: introduce bulking for xdp tx return path
Date: Fri, 6 Nov 2020 19:19:07 +0100
Message-Id: <3764c855c42d719000aa56bb946b3ddfd00971f2.1604686496.git.lorenzo@kernel.org>

The XDP bulk APIs introduce a defer/flush mechanism to return pages
belonging to the same xdp_mem_allocator object (identified via the
mem.id field) in bulk, optimizing I-cache and D-cache usage, since
xdp_return_frame is usually run inside the driver NAPI tx completion
loop. The bulk queue size is set to 16 to match how XDP_REDIRECT
bulking works. The bulk is flushed when it is full or when mem.id
changes. xdp_frame_bulk is usually stored/allocated on the function
call stack to avoid locking penalties. The current implementation
considers only the page_pool memory model.
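
To make the defer/flush semantics concrete before diving into the diff,
the mechanism can be modelled in a few lines of self-contained userspace
C: entries queue up until the array fills or the owning allocator id
changes, then everything queued is released at once. Every name below is
invented for illustration; this is a sketch of the idea, not the kernel
code:

#include <stdio.h>

#define BULK_QUEUE_SIZE 16

struct bulk_queue {
	int count;
	int mem_id;			/* stands in for xdp_mem_allocator mem.id */
	void *q[BULK_QUEUE_SIZE];
};

/* stand-in for returning every queued page to its page_pool */
static void bulk_flush(struct bulk_queue *bq)
{
	if (bq->count)
		printf("releasing %d pages owned by mem_id %d\n",
		       bq->count, bq->mem_id);
	bq->count = 0;
}

static void bulk_return(struct bulk_queue *bq, void *page, int mem_id)
{
	/* flush when full or when the owning allocator changes */
	if (bq->count == BULK_QUEUE_SIZE || mem_id != bq->mem_id)
		bulk_flush(bq);
	bq->mem_id = mem_id;
	bq->q[bq->count++] = page;
}

int main(void)
{
	struct bulk_queue bq = { .count = 0, .mem_id = 1 };
	char pages[40];
	int i;

	for (i = 0; i < 40; i++)	/* the id switch at i == 20 forces a flush */
		bulk_return(&bq, &pages[i], i < 20 ? 1 : 2);
	bulk_flush(&bq);		/* final drain, like xdp_flush_frame_bulk() */
	return 0;
}

Batching this way means the cache-hot release path (in the real code,
page_pool_put_full_page()) runs up to 16 times back to back instead of
being interleaved with the rest of the completion loop.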
Suggested-by: Jesper Dangaard Brouer
Signed-off-by: Lorenzo Bianconi
---
 include/net/xdp.h | 11 ++++++++-
 net/core/xdp.c    | 61 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 3814fb631d52..f89292ca305a 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -104,6 +104,12 @@ struct xdp_frame {
 	struct net_device *dev_rx; /* used by cpumap */
 };
 
+#define XDP_BULK_QUEUE_SIZE	16
+struct xdp_frame_bulk {
+	int count;
+	void *xa;
+	void *q[XDP_BULK_QUEUE_SIZE];
+};
 
 static inline struct skb_shared_info *
 xdp_get_shared_info_from_frame(struct xdp_frame *frame)
@@ -194,6 +200,9 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
 void xdp_return_frame(struct xdp_frame *xdpf);
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf);
 void xdp_return_buff(struct xdp_buff *xdp);
+void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq);
+void xdp_return_frame_bulk(struct xdp_frame *xdpf,
+			   struct xdp_frame_bulk *bq);
 
 /* When sending xdp_frame into the network stack, then there is no
  * return point callback, which is needed to release e.g. DMA-mapping
@@ -245,6 +254,6 @@ bool xdp_attachment_flags_ok(struct xdp_attachment_info *info,
 void xdp_attachment_setup(struct xdp_attachment_info *info,
 			  struct netdev_bpf *bpf);
 
-#define DEV_MAP_BULK_SIZE 16
+#define DEV_MAP_BULK_SIZE XDP_BULK_QUEUE_SIZE
 
 #endif /* __LINUX_NET_XDP_H__ */

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 48aba933a5a8..66ac275a0360 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -380,6 +380,67 @@ void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
 
+/* XDP bulk APIs introduce a defer/flush mechanism to return
+ * pages belonging to the same xdp_mem_allocator object
+ * (identified via the mem.id field) in bulk to optimize
+ * I-cache and D-cache.
+ * The bulk queue size is set to 16 to be aligned to how
+ * XDP_REDIRECT bulking works. The bulk is flushed when
+ * it is full or when mem.id changes.
+ * xdp_frame_bulk is usually stored/allocated on the function
+ * call-stack to avoid locking penalties.
+ */
+void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
+{
+	struct xdp_mem_allocator *xa = bq->xa;
+	int i;
+
+	if (unlikely(!xa))
+		return;
+
+	for (i = 0; i < bq->count; i++) {
+		struct page *page = virt_to_head_page(bq->q[i]);
+
+		page_pool_put_full_page(xa->page_pool, page, false);
+	}
+	bq->count = 0;
+}
+EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk);
+
+void xdp_return_frame_bulk(struct xdp_frame *xdpf,
+			   struct xdp_frame_bulk *bq)
+{
+	struct xdp_mem_info *mem = &xdpf->mem;
+	struct xdp_mem_allocator *xa;
+
+	if (mem->type != MEM_TYPE_PAGE_POOL) {
+		__xdp_return(xdpf->data, &xdpf->mem, false);
+		return;
+	}
+
+	rcu_read_lock();
+
+	xa = bq->xa;
+	if (unlikely(!xa)) {
+		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+		bq->count = 0;
+		bq->xa = xa;
+	}
+
+	if (bq->count == XDP_BULK_QUEUE_SIZE)
+		xdp_flush_frame_bulk(bq);
+
+	if (mem->id != xa->mem.id) {
+		xdp_flush_frame_bulk(bq);
+		bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+	}
+
+	bq->q[bq->count++] = xdpf->data;
+
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
+
 void xdp_return_buff(struct xdp_buff *xdp)
 {
 	__xdp_return(xdp->data, &xdp->rxq->mem, true);
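
With the implementation above in place, the expected call pattern from a
driver NAPI tx completion handler looks roughly like this sketch
(get_completed_frame(), sq and budget are hypothetical stand-ins for the
driver's own completion walk; patch 5/5 converts mlx5 to exactly this
shape):

struct xdp_frame_bulk bq;

bq.xa = NULL;	/* an empty bulk queue is marked by xa == NULL */
for (i = 0; i < budget; i++) {
	struct xdp_frame *xdpf = get_completed_frame(sq);

	/* defer the frame; flushes internally when the queue is full
	 * or when the frame's mem.id differs from the queued ones
	 */
	xdp_return_frame_bulk(xdpf, &bq);
}
xdp_flush_frame_bulk(&bq);	/* return whatever is still queued */

The trailing flush is mandatory: without it, up to
XDP_BULK_QUEUE_SIZE - 1 frames would still sit in the on-stack queue
when the handler returns.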
From patchwork Fri Nov 6 18:19:08 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 322211
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, lorenzo.bianconi@redhat.com, davem@davemloft.net,
 kuba@kernel.org, brouer@redhat.com, ilias.apalodimas@linaro.org
Subject: [PATCH v4 net-next 2/5] net: page_pool: add bulk support for ptr_ring
Date: Fri, 6 Nov 2020 19:19:08 +0100
Message-Id: <1a39bf0efb8c2832245216d7ccd41582c408e9f4.1604686496.git.lorenzo@kernel.org>

Introduce the capability to batch the page_pool ptr_ring refill,
since it usually runs inside the driver NAPI tx completion loop.

Suggested-by: Jesper Dangaard Brouer
Signed-off-by: Lorenzo Bianconi
---
 include/net/page_pool.h | 26 ++++++++++++++++
 net/core/page_pool.c    | 66 ++++++++++++++++++++++++++++++++++-------
 net/core/xdp.c          |  9 ++----
 3 files changed, 84 insertions(+), 17 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 81d7773f96cd..b5b195305346 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -152,6 +152,8 @@ struct page_pool *page_pool_create(const struct page_pool_params *params);
 void page_pool_destroy(struct page_pool *pool);
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *));
 void page_pool_release_page(struct page_pool *pool, struct page *page);
+void page_pool_put_page_bulk(struct page_pool *pool, void **data,
+			     int count);
 #else
 static inline void page_pool_destroy(struct page_pool *pool)
 {
@@ -165,6 +167,11 @@ static inline void page_pool_release_page(struct page_pool *pool,
 					  struct page *page)
 {
 }
+
+static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
+					   int count)
+{
+}
 #endif
 
 void page_pool_put_page(struct page_pool *pool, struct page *page,
@@ -215,4 +222,23 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
 	if (unlikely(pool->p.nid != new_nid))
 		page_pool_update_nid(pool, new_nid);
 }
+
+static inline void page_pool_ring_lock(struct page_pool *pool)
+	__acquires(&pool->ring.producer_lock)
+{
+	if (in_serving_softirq())
+		spin_lock(&pool->ring.producer_lock);
+	else
+		spin_lock_bh(&pool->ring.producer_lock);
+}
+
+static inline void page_pool_ring_unlock(struct page_pool *pool)
+	__releases(&pool->ring.producer_lock)
+{
+	if (in_serving_softirq())
+		spin_unlock(&pool->ring.producer_lock);
+	else
+		spin_unlock_bh(&pool->ring.producer_lock);
+}
+
 #endif /* _NET_PAGE_POOL_H */

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ef98372facf6..31dac2ad4a1f 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -11,6 +11,8 @@
 #include <linux/device.h>
 
 #include <net/page_pool.h>
+#include <net/xdp.h>
+
 #include <linux/dma-direction.h>
 #include <linux/dma-mapping.h>
 #include <linux/page-frag-cache.h>
@@ -362,8 +364,9 @@ static bool pool_page_reusable(struct page_pool *pool, struct page *page)
  * If the page refcnt != 1, then the page will be returned to memory
  * subsystem.
  */
-void page_pool_put_page(struct page_pool *pool, struct page *page,
-			unsigned int dma_sync_size, bool allow_direct)
+static struct page *
+__page_pool_put_page(struct page_pool *pool, struct page *page,
+		     unsigned int dma_sync_size, bool allow_direct)
 {
 	/* This allocator is optimized for the XDP mode that uses
 	 * one-frame-per-page, but have fallbacks that act like the
@@ -379,15 +382,12 @@ void page_pool_put_page(struct page_pool *pool, struct page *page,
 		page_pool_dma_sync_for_device(pool, page, dma_sync_size);
 
-		if (allow_direct && in_serving_softirq())
-			if (page_pool_recycle_in_cache(page, pool))
-				return;
+		if (allow_direct && in_serving_softirq() &&
+		    page_pool_recycle_in_cache(page, pool))
+			return NULL;
 
-		if (!page_pool_recycle_in_ring(pool, page)) {
-			/* Cache full, fallback to free pages */
-			page_pool_return_page(pool, page);
-		}
-		return;
+		/* page is candidate for bulking */
+		return page;
 	}
 
 	/* Fallback/non-XDP mode: API user have elevated refcnt.
 	 *
@@ -405,9 +405,55 @@ void page_pool_put_page(struct page_pool *pool, struct page *page,
 	/* Do not replace this with page_pool_return_page() */
 	page_pool_release_page(pool, page);
 	put_page(page);
+
+	return NULL;
+}
+
+void page_pool_put_page(struct page_pool *pool, struct page *page,
+			unsigned int dma_sync_size, bool allow_direct)
+{
+	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
+	if (page && !page_pool_recycle_in_ring(pool, page)) {
+		/* Cache full, fallback to free pages */
+		page_pool_return_page(pool, page);
+	}
 }
 EXPORT_SYMBOL(page_pool_put_page);
 
+/* Caller must not use data area after call, as this function overwrites it */
+void page_pool_put_page_bulk(struct page_pool *pool, void **data,
+			     int count)
+{
+	int i, bulk_len = 0, pa_len = 0;
+
+	for (i = 0; i < count; i++) {
+		struct page *page = virt_to_head_page(data[i]);
+
+		page = __page_pool_put_page(pool, page, -1, false);
+		/* Approved for bulk recycling in ptr_ring cache */
+		if (page)
+			data[bulk_len++] = page;
+	}
+
+	if (!bulk_len)
+		return;
+
+	/* Bulk producer into ptr_ring page_pool cache */
+	page_pool_ring_lock(pool);
+	for (i = 0; i < bulk_len; i++) {
+		if (__ptr_ring_produce(&pool->ring, data[i]))
+			data[pa_len++] = data[i];
+	}
+	page_pool_ring_unlock(pool);
+
+	/* ptr_ring cache full, free pages outside producer lock since
+	 * put_page() with refcnt == 1 can be an expensive operation
+	 */
+	for (i = 0; i < pa_len; i++)
+		page_pool_return_page(pool, data[i]);
+}
+EXPORT_SYMBOL(page_pool_put_page_bulk);
+
 static void page_pool_empty_ring(struct page_pool *pool)
 {
 	struct page *page;

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 66ac275a0360..ff7c801bd40c 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -393,16 +393,11 @@ EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
 void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
 {
 	struct xdp_mem_allocator *xa = bq->xa;
-	int i;
 
-	if (unlikely(!xa))
+	if (unlikely(!xa || !bq->count))
 		return;
 
-	for (i = 0; i < bq->count; i++) {
-		struct page *page = virt_to_head_page(bq->q[i]);
-
-		page_pool_put_full_page(xa->page_pool, page, false);
-	}
+	page_pool_put_page_bulk(xa->page_pool, bq->q, bq->count);
 	bq->count = 0;
 }
 EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk);
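
The point of the refactor above is lock amortization:
__page_pool_put_page() now only decides whether a page is a ring
candidate, while page_pool_put_page_bulk() takes the ptr_ring producer
lock once per batch instead of once per page, with page_pool_ring_lock()
picking the plain or _bh spinlock variant depending on whether the
caller already runs in softirq context. A rough caller-side sketch,
assuming pool is a valid page_pool and the entries are page virtual
addresses it owns (as with bq->q above; frame_va0/frame_va1 are
hypothetical):

void *data[XDP_BULK_QUEUE_SIZE];
int n = 0;

data[n++] = frame_va0;	/* hypothetical xdpf->data pointers */
data[n++] = frame_va1;

/* one producer-lock round trip for the whole batch; data[] is
 * scratch space and must not be read back after the call
 */
page_pool_put_page_bulk(pool, data, n);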
From patchwork Fri Nov 6 18:19:11 2020
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 322210
From: Lorenzo Bianconi
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, lorenzo.bianconi@redhat.com, davem@davemloft.net,
 kuba@kernel.org, brouer@redhat.com, ilias.apalodimas@linaro.org
Subject: [PATCH v4 net-next 5/5] net: mlx5: add xdp tx return bulking support
Date: Fri, 6 Nov 2020 19:19:11 +0100
Message-Id: <652d803bf1ffab08ff947da5e915add9fb737846.1604686496.git.lorenzo@kernel.org>

Convert the mlx5 driver to the xdp_return_frame_bulk APIs.

XDP_REDIRECT (upstream codepath): 8.5 Mpps
XDP_REDIRECT (upstream codepath + bulking APIs): 10.1 Mpps

Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index ae90d533a350..5fdfbf390d5c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -369,8 +369,10 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 				  bool recycle)
 {
 	struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo;
+	struct xdp_frame_bulk bq;
 	u16 i;
 
+	bq.xa = NULL;
 	for (i = 0; i < wi->num_pkts; i++) {
 		struct mlx5e_xdp_info xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
 
@@ -379,7 +381,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 			/* XDP_TX from the XSK RQ and XDP_REDIRECT */
 			dma_unmap_single(sq->pdev, xdpi.frame.dma_addr,
 					 xdpi.frame.xdpf->len, DMA_TO_DEVICE);
-			xdp_return_frame(xdpi.frame.xdpf);
+			xdp_return_frame_bulk(xdpi.frame.xdpf, &bq);
 			break;
 		case MLX5E_XDP_XMIT_MODE_PAGE:
 			/* XDP_TX from the regular RQ */
@@ -393,6 +395,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 			WARN_ON_ONCE(true);
 		}
 	}
+	xdp_flush_frame_bulk(&bq);
 }
 
 bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq)
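
Taken at face value, those numbers mean the bulking APIs lift
XDP_REDIRECT throughput by (10.1 - 8.5) / 8.5 ≈ 19% on this test, purely
from batching the tx return path: one rhashtable mem.id lookup and one
ptr_ring producer-lock round trip per batch of up to 16 frames, instead
of one of each per frame.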