From patchwork Wed Jan 6 14:42:06 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sieng Piaw Liew
X-Patchwork-Id: 358013
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, Sieng Piaw Liew,
    netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 5/7] bcm63xx_enet: consolidate rx SKB ring cleanup code
Date: Wed, 6 Jan 2021 22:42:06 +0800
Message-Id: <20210106144208.1935-6-liew.s.piaw@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210106144208.1935-1-liew.s.piaw@gmail.com>
References: <20210106144208.1935-1-liew.s.piaw@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

The rx SKB ring uses the
same code for cleanup at various points. Combine them into a function
to reduce lines of code.

Signed-off-by: Sieng Piaw Liew
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 72 ++++++--------------
 1 file changed, 22 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 96d56c3e2cc9..e34b05b10e43 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -860,6 +860,24 @@ static void bcm_enet_adjust_link(struct net_device *dev)
 		priv->pause_tx ? "tx" : "off");
 }
 
+static void bcm_enet_free_rx_skb_ring(struct device *kdev, struct bcm_enet_priv *priv)
+{
+	int i;
+
+	for (i = 0; i < priv->rx_ring_size; i++) {
+		struct bcm_enet_desc *desc;
+
+		if (!priv->rx_skb[i])
+			continue;
+
+		desc = &priv->rx_desc_cpu[i];
+		dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
+				 DMA_FROM_DEVICE);
+		kfree_skb(priv->rx_skb[i]);
+	}
+	kfree(priv->rx_skb);
+}
+
 /*
  * open callback, allocate dma rings & buffers and start rx operation
  */
@@ -1084,18 +1102,7 @@ static int bcm_enet_open(struct net_device *dev)
 	return 0;
 
 out:
-	for (i = 0; i < priv->rx_ring_size; i++) {
-		struct bcm_enet_desc *desc;
-
-		if (!priv->rx_skb[i])
-			continue;
-
-		desc = &priv->rx_desc_cpu[i];
-		dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-				 DMA_FROM_DEVICE);
-		kfree_skb(priv->rx_skb[i]);
-	}
-	kfree(priv->rx_skb);
+	bcm_enet_free_rx_skb_ring(kdev, priv);
 
 out_free_tx_skb:
 	kfree(priv->tx_skb);
@@ -1174,7 +1181,6 @@ static int bcm_enet_stop(struct net_device *dev)
 {
 	struct bcm_enet_priv *priv;
 	struct device *kdev;
-	int i;
 
 	priv = netdev_priv(dev);
 	kdev = &priv->pdev->dev;
@@ -1203,20 +1209,9 @@ static int bcm_enet_stop(struct net_device *dev)
 	bcm_enet_tx_reclaim(dev, 1);
 
 	/* free the rx skb ring */
-	for (i = 0; i < priv->rx_ring_size; i++) {
-		struct bcm_enet_desc *desc;
-
-		if (!priv->rx_skb[i])
-			continue;
-
-		desc = &priv->rx_desc_cpu[i];
-		dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-				 DMA_FROM_DEVICE);
-		kfree_skb(priv->rx_skb[i]);
-	}
+	bcm_enet_free_rx_skb_ring(kdev, priv);
 
 	/* free remaining allocated memory */
-	kfree(priv->rx_skb);
 	kfree(priv->tx_skb);
 	dma_free_coherent(kdev, priv->rx_desc_alloc_size,
 			  priv->rx_desc_cpu, priv->rx_desc_dma);
@@ -2303,18 +2298,7 @@ static int bcm_enetsw_open(struct net_device *dev)
 	return 0;
 
 out:
-	for (i = 0; i < priv->rx_ring_size; i++) {
-		struct bcm_enet_desc *desc;
-
-		if (!priv->rx_skb[i])
-			continue;
-
-		desc = &priv->rx_desc_cpu[i];
-		dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-				 DMA_FROM_DEVICE);
-		kfree_skb(priv->rx_skb[i]);
-	}
-	kfree(priv->rx_skb);
+	bcm_enet_free_rx_skb_ring(kdev, priv);
 
 out_free_tx_skb:
 	kfree(priv->tx_skb);
@@ -2343,7 +2327,6 @@ static int bcm_enetsw_stop(struct net_device *dev)
 {
 	struct bcm_enet_priv *priv;
 	struct device *kdev;
-	int i;
 
 	priv = netdev_priv(dev);
 	kdev = &priv->pdev->dev;
@@ -2366,20 +2349,9 @@ static int bcm_enetsw_stop(struct net_device *dev)
 	bcm_enet_tx_reclaim(dev, 1);
 
 	/* free the rx skb ring */
-	for (i = 0; i < priv->rx_ring_size; i++) {
-		struct bcm_enet_desc *desc;
-
-		if (!priv->rx_skb[i])
-			continue;
-
-		desc = &priv->rx_desc_cpu[i];
-		dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
-				 DMA_FROM_DEVICE);
-		kfree_skb(priv->rx_skb[i]);
-	}
+	bcm_enet_free_rx_skb_ring(kdev, priv);
 
 	/* free remaining allocated memory */
-	kfree(priv->rx_skb);
 	kfree(priv->tx_skb);
 	dma_free_coherent(kdev, priv->rx_desc_alloc_size,
 			  priv->rx_desc_cpu, priv->rx_desc_dma);

From patchwork Wed Jan 6 14:42:07 2021
X-Patchwork-Submitter: Sieng Piaw Liew
X-Patchwork-Id: 358012
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, Sieng Piaw Liew,
    netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 6/7] bcm63xx_enet: convert to build_skb
Date: Wed, 6 Jan 2021 22:42:07 +0800
Message-Id: <20210106144208.1935-7-liew.s.piaw@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210106144208.1935-1-liew.s.piaw@gmail.com>
References: <20210106144208.1935-1-liew.s.piaw@gmail.com>

We can increase the efficiency of the rx path by using bare buffers to
receive packets, then building SKBs around them just before passing them
into the network stack. In contrast, preallocating SKBs too early reduces
CPU cache efficiency.

Check whether we're in NAPI context when refilling RX. Normally we're
almost always running in NAPI context.
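The buffer-first scheme described above can be sketched in plain userspace C. This is a hypothetical model, not the driver's code: `refill_ring`, `build_view`, and the sizes below are illustrative stand-ins for `bcm_enet_refill_rx`, `build_skb`, and the driver's `rx_buf_offset`/`rx_buf_size` fields. The point it demonstrates is that the ring holds raw buffers only, and an skb-like view is built around a buffer, without copying, only when a packet is handed up.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define RING_SIZE  4
#define BUF_OFFSET 32          /* models rx_buf_offset: headroom kept in front */
#define BUF_SIZE   256         /* models rx_buf_size: DMA-visible payload area */

/* Models the SKB built late by build_skb(): it only wraps an existing
 * buffer; the payload is never copied. */
struct skb_view {
	unsigned char *head;   /* start of the underlying buffer */
	unsigned char *data;   /* head + headroom */
	size_t len;
};

/* Refill: allocate a raw buffer for every empty ring slot. No SKB (or
 * skb_view) exists yet at this point. Returns how many slots were filled. */
static int refill_ring(unsigned char *ring[RING_SIZE])
{
	int filled = 0;

	for (int i = 0; i < RING_SIZE; i++) {
		if (ring[i])
			continue;
		ring[i] = malloc(BUF_OFFSET + BUF_SIZE);
		if (!ring[i])
			break;
		filled++;
	}
	return filled;
}

/* "build_skb": wrap the buffer that already holds the received payload. */
static struct skb_view build_view(unsigned char *buf, size_t len)
{
	struct skb_view v = {
		.head = buf,
		.data = buf + BUF_OFFSET,  /* models skb_reserve(skb, offset) */
		.len  = len,
	};
	return v;
}
```

The CPU-cache argument from the commit message falls out of this shape: the buffer is only touched when real packet data arrives, instead of initializing a full SKB per descriptor long before it is needed.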
Dispatch to napi_alloc_frag directly instead of relying on
netdev_alloc_frag, which does the same but with the overhead of
local_bh_disable/enable.

Tested on BCM6328 320 MHz and iperf3 -M 512 to measure packet/sec
performance. Included netif_receive_skb_list and NET_IP_ALIGN
optimizations.

Before:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  49.9 MBytes  41.9 Mbits/sec  197         sender
[  4]   0.00-10.00  sec  49.3 MBytes  41.3 Mbits/sec              receiver

After:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   171 MBytes  47.8 Mbits/sec  272         sender
[  4]   0.00-30.00  sec   170 MBytes  47.6 Mbits/sec              receiver

Signed-off-by: Sieng Piaw Liew
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 111 ++++++++++---------
 drivers/net/ethernet/broadcom/bcm63xx_enet.h |  14 ++-
 2 files changed, 71 insertions(+), 54 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index e34b05b10e43..c11491429ed2 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -220,7 +220,7 @@ static void bcm_enet_mdio_write_mii(struct net_device *dev, int mii_id,
 /*
  * refill rx queue
  */
-static int bcm_enet_refill_rx(struct net_device *dev)
+static int bcm_enet_refill_rx(struct net_device *dev, bool napi_mode)
 {
 	struct bcm_enet_priv *priv;
 
@@ -228,29 +228,29 @@ static int bcm_enet_refill_rx(struct net_device *dev)
 
 	while (priv->rx_desc_count < priv->rx_ring_size) {
 		struct bcm_enet_desc *desc;
-		struct sk_buff *skb;
-		dma_addr_t p;
 		int desc_idx;
 		u32 len_stat;
 
 		desc_idx = priv->rx_dirty_desc;
 		desc = &priv->rx_desc_cpu[desc_idx];
 
-		if (!priv->rx_skb[desc_idx]) {
-			if (priv->enet_is_sw)
-				skb = netdev_alloc_skb_ip_align(dev, priv->rx_skb_size);
+		if (!priv->rx_buf[desc_idx]) {
+			void *buf;
+
+			if (likely(napi_mode))
+				buf = napi_alloc_frag(priv->rx_frag_size);
 			else
-				skb = netdev_alloc_skb(dev, priv->rx_skb_size);
-			if (!skb)
+				buf = netdev_alloc_frag(priv->rx_frag_size);
+			if (unlikely(!buf))
 				break;
-			priv->rx_skb[desc_idx] = skb;
-			p = dma_map_single(&priv->pdev->dev, skb->data,
-					   priv->rx_skb_size,
-					   DMA_FROM_DEVICE);
-			desc->address = p;
+			priv->rx_buf[desc_idx] = buf;
+			desc->address = dma_map_single(&priv->pdev->dev,
+						       buf + priv->rx_buf_offset,
+						       priv->rx_buf_size,
+						       DMA_FROM_DEVICE);
 		}
 
-		len_stat = priv->rx_skb_size << DMADESC_LENGTH_SHIFT;
+		len_stat = priv->rx_buf_size << DMADESC_LENGTH_SHIFT;
 		len_stat |= DMADESC_OWNER_MASK;
 		if (priv->rx_dirty_desc == priv->rx_ring_size - 1) {
 			len_stat |= (DMADESC_WRAP_MASK >> priv->dma_desc_shift);
@@ -290,7 +290,7 @@ static void bcm_enet_refill_rx_timer(struct timer_list *t)
 	struct net_device *dev = priv->net_dev;
 
 	spin_lock(&priv->rx_lock);
-	bcm_enet_refill_rx(dev);
+	bcm_enet_refill_rx(dev, false);
 	spin_unlock(&priv->rx_lock);
 }
 
@@ -320,6 +320,7 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		int desc_idx;
 		u32 len_stat;
 		unsigned int len;
+		void *buf;
 
 		desc_idx = priv->rx_curr_desc;
 		desc = &priv->rx_desc_cpu[desc_idx];
@@ -365,16 +366,14 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		}
 
 		/* valid packet */
-		skb = priv->rx_skb[desc_idx];
+		buf = priv->rx_buf[desc_idx];
 		len = (len_stat & DMADESC_LENGTH_MASK) >> DMADESC_LENGTH_SHIFT;
 		/* don't include FCS */
 		len -= 4;
 
 		if (len < copybreak) {
-			struct sk_buff *nskb;
-
-			nskb = napi_alloc_skb(&priv->napi, len);
-			if (!nskb) {
+			skb = napi_alloc_skb(&priv->napi, len);
+			if (unlikely(!skb)) {
 				/* forget packet, just rearm desc */
 				dev->stats.rx_dropped++;
 				continue;
@@ -382,14 +381,21 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 
 			dma_sync_single_for_cpu(kdev, desc->address,
 						len, DMA_FROM_DEVICE);
-			memcpy(nskb->data, skb->data, len);
+			memcpy(skb->data, buf + priv->rx_buf_offset, len);
 			dma_sync_single_for_device(kdev, desc->address,
 						   len, DMA_FROM_DEVICE);
-			skb = nskb;
 		} else {
-			dma_unmap_single(&priv->pdev->dev, desc->address,
-					 priv->rx_skb_size, DMA_FROM_DEVICE);
-			priv->rx_skb[desc_idx] = NULL;
+			dma_unmap_single(kdev, desc->address,
+					 priv->rx_buf_size, DMA_FROM_DEVICE);
+			priv->rx_buf[desc_idx] = NULL;
+
+			skb = build_skb(buf, priv->rx_frag_size);
+			if (unlikely(!skb)) {
+				skb_free_frag(buf);
+				dev->stats.rx_dropped++;
+				continue;
+			}
+			skb_reserve(skb, priv->rx_buf_offset);
 		}
 
 		skb_put(skb, len);
@@ -403,7 +409,7 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 	netif_receive_skb_list(&rx_list);
 
 	if (processed || !priv->rx_desc_count) {
-		bcm_enet_refill_rx(dev);
+		bcm_enet_refill_rx(dev, true);
 
 		/* kick rx dma */
 		enet_dmac_writel(priv, priv->dma_chan_en_mask,
@@ -860,22 +866,22 @@ static void bcm_enet_adjust_link(struct net_device *dev)
 		priv->pause_tx ? "tx" : "off");
 }
 
-static void bcm_enet_free_rx_skb_ring(struct device *kdev, struct bcm_enet_priv *priv)
+static void bcm_enet_free_rx_buf_ring(struct device *kdev, struct bcm_enet_priv *priv)
 {
 	int i;
 
 	for (i = 0; i < priv->rx_ring_size; i++) {
 		struct bcm_enet_desc *desc;
 
-		if (!priv->rx_skb[i])
+		if (!priv->rx_buf[i])
 			continue;
 
 		desc = &priv->rx_desc_cpu[i];
-		dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
+		dma_unmap_single(kdev, desc->address, priv->rx_buf_size,
 				 DMA_FROM_DEVICE);
-		kfree_skb(priv->rx_skb[i]);
+		skb_free_frag(priv->rx_buf[i]);
 	}
-	kfree(priv->rx_skb);
+	kfree(priv->rx_buf);
 }
 
 /*
@@ -987,10 +993,10 @@ static int bcm_enet_open(struct net_device *dev)
 	priv->tx_curr_desc = 0;
 	spin_lock_init(&priv->tx_lock);
 
-	/* init & fill rx ring with skbs */
-	priv->rx_skb = kcalloc(priv->rx_ring_size, sizeof(struct sk_buff *),
+	/* init & fill rx ring with buffers */
+	priv->rx_buf = kcalloc(priv->rx_ring_size, sizeof(void *),
 			       GFP_KERNEL);
-	if (!priv->rx_skb) {
+	if (!priv->rx_buf) {
 		ret = -ENOMEM;
 		goto out_free_tx_skb;
 	}
@@ -1007,8 +1013,8 @@ static int bcm_enet_open(struct net_device *dev)
 	enet_dmac_writel(priv, ENETDMA_BUFALLOC_FORCE_MASK | 0,
 			 ENETDMAC_BUFALLOC, priv->rx_chan);
 
-	if (bcm_enet_refill_rx(dev)) {
-		dev_err(kdev, "cannot allocate rx skb queue\n");
+	if (bcm_enet_refill_rx(dev, false)) {
+		dev_err(kdev, "cannot allocate rx buffer queue\n");
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -1102,7 +1108,7 @@ static int bcm_enet_open(struct net_device *dev)
 	return 0;
 
 out:
-	bcm_enet_free_rx_skb_ring(kdev, priv);
+	bcm_enet_free_rx_buf_ring(kdev, priv);
 
 out_free_tx_skb:
 	kfree(priv->tx_skb);
@@ -1208,8 +1214,8 @@ static int bcm_enet_stop(struct net_device *dev)
 	/* force reclaim of all tx buffers */
 	bcm_enet_tx_reclaim(dev, 1);
 
-	/* free the rx skb ring */
-	bcm_enet_free_rx_skb_ring(kdev, priv);
+	/* free the rx buffer ring */
+	bcm_enet_free_rx_buf_ring(kdev, priv);
 
 	/* free remaining allocated memory */
 	kfree(priv->tx_skb);
@@ -1633,9 +1639,12 @@ static int bcm_enet_change_mtu(struct net_device *dev, int new_mtu)
 	 * align rx buffer size to dma burst len, account FCS since
 	 * it's appended
 	 */
-	priv->rx_skb_size = ALIGN(actual_mtu + ETH_FCS_LEN,
+	priv->rx_buf_size = ALIGN(actual_mtu + ETH_FCS_LEN,
 				  priv->dma_maxburst * 4);
 
+	priv->rx_frag_size = SKB_DATA_ALIGN(priv->rx_buf_offset + priv->rx_buf_size) +
+			     SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
 	dev->mtu = new_mtu;
 	return 0;
 }
@@ -1720,6 +1729,7 @@ static int bcm_enet_probe(struct platform_device *pdev)
 
 	priv->enet_is_sw = false;
 	priv->dma_maxburst = BCMENET_DMA_MAXBURST;
+	priv->rx_buf_offset = NET_SKB_PAD;
 
 	ret = bcm_enet_change_mtu(dev, dev->mtu);
 	if (ret)
@@ -2137,7 +2147,7 @@ static int bcm_enetsw_open(struct net_device *dev)
 	priv->tx_skb = kcalloc(priv->tx_ring_size, sizeof(struct sk_buff *),
 			       GFP_KERNEL);
 	if (!priv->tx_skb) {
-		dev_err(kdev, "cannot allocate rx skb queue\n");
+		dev_err(kdev, "cannot allocate tx skb queue\n");
 		ret = -ENOMEM;
 		goto out_free_tx_ring;
 	}
@@ -2147,11 +2157,11 @@ static int bcm_enetsw_open(struct net_device *dev)
 	priv->tx_curr_desc = 0;
 	spin_lock_init(&priv->tx_lock);
 
-	/* init & fill rx ring with skbs */
-	priv->rx_skb = kcalloc(priv->rx_ring_size, sizeof(struct sk_buff *),
+	/* init & fill rx ring with buffers */
+	priv->rx_buf = kcalloc(priv->rx_ring_size, sizeof(void *),
 			       GFP_KERNEL);
-	if (!priv->rx_skb) {
-		dev_err(kdev, "cannot allocate rx skb queue\n");
+	if (!priv->rx_buf) {
+		dev_err(kdev, "cannot allocate rx buffer queue\n");
 		ret = -ENOMEM;
 		goto out_free_tx_skb;
 	}
@@ -2198,8 +2208,8 @@ static int bcm_enetsw_open(struct net_device *dev)
 	enet_dma_writel(priv, ENETDMA_BUFALLOC_FORCE_MASK | 0,
 			ENETDMA_BUFALLOC_REG(priv->rx_chan));
 
-	if (bcm_enet_refill_rx(dev)) {
-		dev_err(kdev, "cannot allocate rx skb queue\n");
+	if (bcm_enet_refill_rx(dev, false)) {
+		dev_err(kdev, "cannot allocate rx buffer queue\n");
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -2298,7 +2308,7 @@ static int bcm_enetsw_open(struct net_device *dev)
 	return 0;
 
 out:
-	bcm_enet_free_rx_skb_ring(kdev, priv);
+	bcm_enet_free_rx_buf_ring(kdev, priv);
 
 out_free_tx_skb:
 	kfree(priv->tx_skb);
@@ -2348,8 +2358,8 @@ static int bcm_enetsw_stop(struct net_device *dev)
 	/* force reclaim of all tx buffers */
 	bcm_enet_tx_reclaim(dev, 1);
 
-	/* free the rx skb ring */
-	bcm_enet_free_rx_skb_ring(kdev, priv);
+	/* free the rx buffer ring */
+	bcm_enet_free_rx_buf_ring(kdev, priv);
 
 	/* free remaining allocated memory */
 	kfree(priv->tx_skb);
@@ -2648,6 +2658,7 @@ static int bcm_enetsw_probe(struct platform_device *pdev)
 	priv->rx_ring_size = BCMENET_DEF_RX_DESC;
 	priv->tx_ring_size = BCMENET_DEF_TX_DESC;
 	priv->dma_maxburst = BCMENETSW_DMA_MAXBURST;
+	priv->rx_buf_offset = NET_SKB_PAD + NET_IP_ALIGN;
 
 	pd = dev_get_platdata(&pdev->dev);
 	if (pd) {
diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.h b/drivers/net/ethernet/broadcom/bcm63xx_enet.h
index 1d3c917eb830..78f1830fb3cb 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.h
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.h
@@ -230,11 +230,17 @@ struct bcm_enet_priv {
 	/* next dirty rx descriptor to refill */
 	int rx_dirty_desc;
 
-	/* size of allocated rx skbs */
-	unsigned int rx_skb_size;
+	/* size of allocated rx buffers */
+	unsigned int rx_buf_size;
 
-	/* list of skb given to hw for rx */
-	struct sk_buff **rx_skb;
+	/* allocated rx buffer offset */
+	unsigned int rx_buf_offset;
+
+	/* size of allocated rx frag */
+	unsigned int rx_frag_size;
+
+	/* list of buffer given to hw for rx */
+	void **rx_buf;
 
 	/* used when rx skb allocation failed, so we defer rx queue
 	 * refill */

From patchwork Wed Jan 6 14:42:08 2021
X-Patchwork-Submitter: Sieng Piaw Liew
X-Patchwork-Id: 358010
From: Sieng Piaw Liew
To: Florian Fainelli
Cc: bcm-kernel-feedback-list@broadcom.com, Sieng Piaw Liew,
    netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 7/7] bcm63xx_enet: improve rx loop
Date: Wed, 6 Jan 2021 22:42:08 +0800
Message-Id: <20210106144208.1935-8-liew.s.piaw@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210106144208.1935-1-liew.s.piaw@gmail.com>
References: <20210106144208.1935-1-liew.s.piaw@gmail.com>

Use the existing rx processed count to track against the budget, thereby
making the budget decrement operation redundant. rx_desc_count can be
updated outside the rx loop, making the loop a bit smaller.

Signed-off-by: Sieng Piaw Liew
---
 drivers/net/ethernet/broadcom/bcm63xx_enet.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index c11491429ed2..fd8767213165 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -339,7 +339,6 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		priv->rx_curr_desc++;
 		if (priv->rx_curr_desc == priv->rx_ring_size)
 			priv->rx_curr_desc = 0;
-		priv->rx_desc_count--;
 
 		/* if the packet does not have start of packet _and_
 		 * end of packet flag set, then just recycle it */
@@ -404,9 +403,10 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		dev->stats.rx_bytes += len;
 		list_add_tail(&skb->list, &rx_list);
 
-	} while (--budget > 0);
+	} while (processed < budget);
 
 	netif_receive_skb_list(&rx_list);
+	priv->rx_desc_count -= processed;
 
 	if (processed || !priv->rx_desc_count) {
 		bcm_enet_refill_rx(dev, true);
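The loop-accounting change in this last patch can be modeled in plain C. This is a hypothetical sketch, not the driver's code: `process_ring` and its counters stand in for `bcm_enet_receive_queue`'s `processed`, `budget`, and `rx_desc_count`. The idea is to loop on `processed < budget` and charge `rx_desc_count` once after the loop, instead of decrementing two counters on every iteration.

```c
#include <assert.h>

/* Model of the reworked rx loop: count packets as they are processed,
 * stop when the NAPI budget is reached or the ring runs dry, and update
 * the ring occupancy with a single subtraction at the end. */
static int process_ring(int *rx_desc_count, int budget)
{
	int processed = 0;

	do {
		/* ring empty: no more descriptors to consume */
		if (*rx_desc_count - processed == 0)
			break;

		/* ...one packet would be received and queued here... */
		processed++;
	} while (processed < budget);

	/* single update outside the loop, replacing the per-iteration
	 * rx_desc_count-- of the old code */
	*rx_desc_count -= processed;
	return processed;
}
```

Behaviorally this matches the old `--budget > 0` loop: at most `budget` packets are consumed per call, and the ring count ends up the same; only the bookkeeping inside the hot loop gets cheaper.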