From patchwork Wed Aug 18 03:32:22 2021
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH RFC 6/7] net: hns3: support tx recycling in the hns3 driver
Date: Wed, 18 Aug 2021 11:32:22 +0800
Message-ID: <1629257542-36145-7-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
References: <1629257542-36145-1-git-send-email-linyunsheng@huawei.com>
List-ID: netdev@vger.kernel.org

Use netif_recyclable_napi_add() to register the page pool with the
NAPI instance, and avoid the DMA mapping/unmapping in the tx path
when a frag's page comes from a page pool owned by the same device.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 32 +++++++++++++++----------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index fcbeb1f..ab86566 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -1689,12 +1689,18 @@ static int hns3_map_and_fill_desc(struct hns3_enet_ring *ring, void *priv,
 		return 0;
 	} else {
 		skb_frag_t *frag = (skb_frag_t *)priv;
+		struct page *page = skb_frag_page(frag);
 
 		size = skb_frag_size(frag);
 		if (!size)
 			return 0;
 
-		dma = skb_frag_dma_map(dev, frag, 0, size, DMA_TO_DEVICE);
+		if (skb_frag_is_pp(frag) && page->pp->p.dev == dev) {
+			dma = page_pool_get_dma_addr(page) + skb_frag_off(frag);
+			type = DESC_TYPE_PP_FRAG;
+		} else {
+			dma = skb_frag_dma_map(dev, frag, 0, size, DMA_TO_DEVICE);
+		}
 	}
 
 	if (unlikely(dma_mapping_error(dev, dma))) {
@@ -4525,7 +4531,7 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
 		ret = hns3_get_vector_ring_chain(tqp_vector,
 						 &vector_ring_chain);
 		if (ret)
-			goto map_ring_fail;
+			return ret;
 
 		ret = h->ae_algo->ops->map_ring_to_vector(h,
 			tqp_vector->vector_irq, &vector_ring_chain);
@@ -4533,19 +4539,10 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
 		hns3_free_vector_ring_chain(tqp_vector, &vector_ring_chain);
 
 		if (ret)
-			goto map_ring_fail;
-
-		netif_napi_add(priv->netdev, &tqp_vector->napi,
-			       hns3_nic_common_poll, NAPI_POLL_WEIGHT);
+			return ret;
 	}
 
 	return 0;
-
-map_ring_fail:
-	while (i--)
-		netif_napi_del(&priv->tqp_vector[i].napi);
-
-	return ret;
 }
 
 static void hns3_nic_init_coal_cfg(struct hns3_nic_priv *priv)
@@ -4754,7 +4751,7 @@ static void hns3_alloc_page_pool(struct hns3_enet_ring *ring)
 				(PAGE_SIZE << hns3_page_order(ring)),
 		.nid = dev_to_node(ring_to_dev(ring)),
 		.dev = ring_to_dev(ring),
-		.dma_dir = DMA_FROM_DEVICE,
+		.dma_dir = DMA_BIDIRECTIONAL,
 		.offset = 0,
 		.max_len = PAGE_SIZE << hns3_page_order(ring),
 	};
@@ -4923,6 +4920,15 @@ int hns3_init_all_ring(struct hns3_nic_priv *priv)
 		u64_stats_init(&priv->ring[i].syncp);
 	}
 
+	for (i = 0; i < priv->vector_num; i++) {
+		struct hns3_enet_tqp_vector *tqp_vector;
+
+		tqp_vector = &priv->tqp_vector[i];
+		netif_recyclable_napi_add(priv->netdev, &tqp_vector->napi,
+					  hns3_nic_common_poll, NAPI_POLL_WEIGHT,
+					  tqp_vector->rx_group.ring->page_pool);
+	}
+
 	return 0;
 
 out_when_alloc_ring_memory:
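
[Editor's note: for readers following the series, the tx-side change in
hns3_map_and_fill_desc() boils down to one decision per frag. The sketch
below restates that decision outside the hns3 descriptor-filling code.
skb_frag_is_pp() comes from an earlier patch in this RFC series and is not
a mainline API at the time of this posting; map_tx_frag() is a hypothetical
helper name used only for illustration.]

	#include <linux/dma-mapping.h>
	#include <linux/skbuff.h>
	#include <net/page_pool.h>

	/* Hypothetical helper: pick a tx DMA address for one skb frag. */
	static dma_addr_t map_tx_frag(struct device *dev, skb_frag_t *frag)
	{
		struct page *page = skb_frag_page(frag);

		/*
		 * If the frag's page still belongs to a page pool owned by
		 * this same device, the page already carries the DMA mapping
		 * that page_pool set up at allocation time, so the stored
		 * address (plus the frag offset) can be reused instead of
		 * mapping the frag again for every transmit.
		 * skb_frag_is_pp() is assumed from this RFC series.
		 */
		if (skb_frag_is_pp(frag) && page->pp->p.dev == dev)
			return page_pool_get_dma_addr(page) + skb_frag_off(frag);

		/* Ordinary page: fall back to a fresh streaming mapping. */
		return skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
					DMA_TO_DEVICE);
	}

[This reuse is only legal because the pool is now created with
.dma_dir = DMA_BIDIRECTIONAL: the mapping installed at allocation time then
covers both the rx DMA and the tx DMA of a recycled page, which is why the
patch changes the direction from DMA_FROM_DEVICE. The DESC_TYPE_PP_FRAG
marker in the real patch additionally tells the tx completion path not to
unmap such frags.]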