From patchwork Wed Nov 18 23:20:12 2020
X-Patchwork-Submitter: David Awogbemila
X-Patchwork-Id: 327809
Date: Wed, 18 Nov 2020 15:20:12 -0800
In-Reply-To: <20201118232014.2910642-1-awogbemila@google.com> Message-Id: <20201118232014.2910642-3-awogbemila@google.com> Mime-Version: 1.0 References: <20201118232014.2910642-1-awogbemila@google.com> X-Mailer: git-send-email 2.29.2.299.gdc1121823c-goog Subject: [PATCH net-next v7 2/4] gve: Add support for raw addressing to the rx path From: David Awogbemila To: netdev@vger.kernel.org Cc: Catherine Sullivan , Yangchun Fu , David Awogbemila Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Catherine Sullivan Add support to use raw dma addresses in the rx path. Due to this new support we can alloc a new buffer instead of making a copy. RX buffers are handed to the networking stack and are re-allocated as needed, avoiding the need to use skb_copy_to_linear_data() as in "qpl" mode. Reviewed-by: Yangchun Fu Signed-off-by: Catherine Sullivan Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve.h | 12 +- drivers/net/ethernet/google/gve/gve_adminq.c | 14 +- drivers/net/ethernet/google/gve/gve_desc.h | 11 +- drivers/net/ethernet/google/gve/gve_main.c | 2 +- drivers/net/ethernet/google/gve/gve_rx.c | 206 ++++++++++++++----- 5 files changed, 183 insertions(+), 62 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 782e2791ee11..d8bba0ba34e3 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -38,6 +38,8 @@ #define NIC_TX_STATS_REPORT_NUM 0 #define NIC_RX_STATS_REPORT_NUM 4 +#define GVE_DATA_SLOT_ADDR_PAGE_MASK (~(PAGE_SIZE - 1)) + /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */ struct gve_rx_desc_queue { struct gve_rx_desc *desc_ring; /* the descriptor ring */ @@ -49,7 +51,7 @@ struct gve_rx_desc_queue { struct gve_rx_slot_page_info { struct page *page; void *page_address; - u32 page_offset; /* offset to write to in page */ + u8 page_offset; /* flipped to second half? */ }; /* A list of pages registered with the device during setup and used by a queue @@ -64,10 +66,11 @@ struct gve_queue_page_list { /* Each slot in the data ring has a 1:1 mapping to a slot in the desc ring */ struct gve_rx_data_queue { - struct gve_rx_data_slot *data_ring; /* read by NIC */ + union gve_rx_data_slot *data_ring; /* read by NIC */ dma_addr_t data_bus; /* dma mapping of the slots */ struct gve_rx_slot_page_info *page_info; /* page info of the buffers */ struct gve_queue_page_list *qpl; /* qpl assigned to this queue */ + u8 raw_addressing; /* use raw_addressing? 
*/ }; struct gve_priv; @@ -82,6 +85,7 @@ struct gve_rx_ring { u32 cnt; /* free-running total number of completed packets */ u32 fill_cnt; /* free-running total number of descs and buffs posted */ u32 mask; /* masks the cnt and fill_cnt to the size of the ring */ + u32 db_threshold; /* threshold for posting new buffs and descs */ u64 rx_copybreak_pkt; /* free-running count of copybreak packets */ u64 rx_copied_pkt; /* free-running total number of copied packets */ u64 rx_skb_alloc_fail; /* free-running count of skb alloc fails */ @@ -194,7 +198,7 @@ struct gve_priv { u16 tx_desc_cnt; /* num desc per ring */ u16 rx_desc_cnt; /* num desc per ring */ u16 tx_pages_per_qpl; /* tx buffer length */ - u16 rx_pages_per_qpl; /* rx buffer length */ + u16 rx_data_slot_cnt; /* rx buffer length */ u64 max_registered_pages; u64 num_registered_pages; /* num pages registered with NIC */ u32 rx_copybreak; /* copy packets smaller than this */ @@ -444,7 +448,7 @@ static inline u32 gve_num_tx_qpls(struct gve_priv *priv) */ static inline u32 gve_num_rx_qpls(struct gve_priv *priv) { - return priv->rx_cfg.num_queues; + return priv->raw_addressing ? 0 : priv->rx_cfg.num_queues; } /* Returns a pointer to the next available tx qpl in the list of qpls diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 1e2d407cb9d2..738dc7f9dd9c 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -408,8 +408,10 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) { struct gve_rx_ring *rx = &priv->rx[queue_index]; union gve_adminq_command cmd; + u32 qpl_id; int err; + qpl_id = priv->raw_addressing ? GVE_RAW_ADDRESSING_QPL_ID : rx->data.qpl->id; memset(&cmd, 0, sizeof(cmd)); cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE); cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) { @@ -420,7 +422,7 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) .queue_resources_addr = cpu_to_be64(rx->q_resources_bus), .rx_desc_ring_addr = cpu_to_be64(rx->desc.bus), .rx_data_ring_addr = cpu_to_be64(rx->data.data_bus), - .queue_page_list_id = cpu_to_be32(rx->data.qpl->id), + .queue_page_list_id = cpu_to_be32(qpl_id), }; err = gve_adminq_issue_cmd(priv, &cmd); @@ -565,11 +567,11 @@ int gve_adminq_describe_device(struct gve_priv *priv) mac = descriptor->mac; dev_info(&priv->pdev->dev, "MAC addr: %pM\n", mac); priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl); - priv->rx_pages_per_qpl = be16_to_cpu(descriptor->rx_pages_per_qpl); - if (priv->rx_pages_per_qpl < priv->rx_desc_cnt) { - dev_err(&priv->pdev->dev, "rx_pages_per_qpl cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n", - priv->rx_pages_per_qpl); - priv->rx_desc_cnt = priv->rx_pages_per_qpl; + priv->rx_data_slot_cnt = be16_to_cpu(descriptor->rx_pages_per_qpl); + if (priv->rx_data_slot_cnt < priv->rx_desc_cnt) { + dev_err(&priv->pdev->dev, "rx_data_slot_cnt cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n", + priv->rx_data_slot_cnt); + priv->rx_desc_cnt = priv->rx_data_slot_cnt; } priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues); dev_opt = (void *)(descriptor + 1); diff --git a/drivers/net/ethernet/google/gve/gve_desc.h b/drivers/net/ethernet/google/gve/gve_desc.h index 54779871d52e..a1c0aaa60139 100644 --- a/drivers/net/ethernet/google/gve/gve_desc.h +++ b/drivers/net/ethernet/google/gve/gve_desc.h @@ -72,12 +72,15 @@ struct gve_rx_desc { } 
__packed; static_assert(sizeof(struct gve_rx_desc) == 64); -/* As with the Tx ring format, the qpl_offset entries below are offsets into an - * ordered list of registered pages. +/* If the device supports raw dma addressing then the addr in data slot is + * the dma address of the buffer. + * If the device only supports registered segments then the addr is a byte + * offset into the registered segment (an ordered list of pages) where the + * buffer is. */ -struct gve_rx_data_slot { - /* byte offset into the rx registered segment of this slot */ +union gve_rx_data_slot { __be64 qpl_offset; + __be64 addr; }; /* GVE Recive Packet Descriptor Seq No */ diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 70685c10db0e..b7dae4f7b2fd 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -694,7 +694,7 @@ static int gve_alloc_qpls(struct gve_priv *priv) } for (; i < num_qpls; i++) { err = gve_alloc_queue_page_list(priv, i, - priv->rx_pages_per_qpl); + priv->rx_data_slot_cnt); if (err) goto free_qpls; } diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 008fa897a3e6..ff395a6564f8 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -16,12 +16,39 @@ static void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx) block->rx = NULL; } +static void gve_rx_free_buffer(struct device *dev, + struct gve_rx_slot_page_info *page_info, + union gve_rx_data_slot *data_slot) +{ + dma_addr_t dma = (dma_addr_t)(be64_to_cpu(data_slot->addr) & + GVE_DATA_SLOT_ADDR_PAGE_MASK); + + gve_free_page(dev, page_info->page, dma, DMA_FROM_DEVICE); +} + +static void gve_rx_unfill_pages(struct gve_priv *priv, struct gve_rx_ring *rx) +{ + if (rx->data.raw_addressing) { + u32 slots = rx->mask + 1; + int i; + + for (i = 0; i < slots; i++) + gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i], + &rx->data.data_ring[i]); + } else { + gve_unassign_qpl(priv, rx->data.qpl->id); + rx->data.qpl = NULL; + } + kvfree(rx->data.page_info); + rx->data.page_info = NULL; +} + static void gve_rx_free_ring(struct gve_priv *priv, int idx) { struct gve_rx_ring *rx = &priv->rx[idx]; struct device *dev = &priv->pdev->dev; + u32 slots = rx->mask + 1; size_t bytes; - u32 slots; gve_rx_remove_from_block(priv, idx); @@ -33,11 +60,8 @@ static void gve_rx_free_ring(struct gve_priv *priv, int idx) rx->q_resources, rx->q_resources_bus); rx->q_resources = NULL; - gve_unassign_qpl(priv, rx->data.qpl->id); - rx->data.qpl = NULL; - kvfree(rx->data.page_info); + gve_rx_unfill_pages(priv, rx); - slots = rx->mask + 1; bytes = sizeof(*rx->data.data_ring) * slots; dma_free_coherent(dev, bytes, rx->data.data_ring, rx->data.data_bus); @@ -46,19 +70,35 @@ static void gve_rx_free_ring(struct gve_priv *priv, int idx) } static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info, - struct gve_rx_data_slot *slot, - dma_addr_t addr, struct page *page) + dma_addr_t addr, struct page *page, __be64 *slot_addr) { page_info->page = page; page_info->page_offset = 0; page_info->page_address = page_address(page); - slot->qpl_offset = cpu_to_be64(addr); + *slot_addr = cpu_to_be64(addr); +} + +static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev, + struct gve_rx_slot_page_info *page_info, + union gve_rx_data_slot *data_slot) +{ + struct page *page; + dma_addr_t dma; + int err; + + err = gve_alloc_page(priv, dev, &page, &dma, 
DMA_FROM_DEVICE); + if (err) + return err; + + gve_setup_rx_buffer(page_info, dma, page, &data_slot->addr); + return 0; } static int gve_prefill_rx_pages(struct gve_rx_ring *rx) { struct gve_priv *priv = rx->gve; u32 slots; + int err; int i; /* Allocate one page per Rx queue slot. Each page is split into two @@ -71,17 +111,30 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx) if (!rx->data.page_info) return -ENOMEM; - rx->data.qpl = gve_assign_rx_qpl(priv); - + if (!rx->data.raw_addressing) + rx->data.qpl = gve_assign_rx_qpl(priv); for (i = 0; i < slots; i++) { - struct page *page = rx->data.qpl->pages[i]; - dma_addr_t addr = i * PAGE_SIZE; + if (!rx->data.raw_addressing) { + struct page *page = rx->data.qpl->pages[i]; + dma_addr_t addr = i * PAGE_SIZE; - gve_setup_rx_buffer(&rx->data.page_info[i], - &rx->data.data_ring[i], addr, page); + gve_setup_rx_buffer(&rx->data.page_info[i], addr, page, + &rx->data.data_ring[i].qpl_offset); + continue; + } + err = gve_rx_alloc_buffer(priv, &priv->pdev->dev, &rx->data.page_info[i], + &rx->data.data_ring[i]); + if (err) + goto alloc_err; } return slots; +alloc_err: + while (i--) + gve_rx_free_buffer(&priv->pdev->dev, + &rx->data.page_info[i], + &rx->data.data_ring[i]); + return err; } static void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx) @@ -110,8 +163,9 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) rx->gve = priv; rx->q_num = idx; - slots = priv->rx_pages_per_qpl; + slots = priv->rx_data_slot_cnt; rx->mask = slots - 1; + rx->data.raw_addressing = priv->raw_addressing; /* alloc rx data ring */ bytes = sizeof(*rx->data.data_ring) * slots; @@ -156,8 +210,8 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) err = -ENOMEM; goto abort_with_q_resources; } - rx->mask = slots - 1; rx->cnt = 0; + rx->db_threshold = priv->rx_desc_cnt / 2; rx->desc.seqno = 1; gve_rx_add_to_block(priv, idx); @@ -168,7 +222,7 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) rx->q_resources, rx->q_resources_bus); rx->q_resources = NULL; abort_filled: - kvfree(rx->data.page_info); + gve_rx_unfill_pages(priv, rx); abort_with_slots: bytes = sizeof(*rx->data.data_ring) * slots; dma_free_coherent(hdev, bytes, rx->data.data_ring, rx->data.data_bus); @@ -233,7 +287,7 @@ static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx, { struct sk_buff *skb = napi_alloc_skb(napi, len); void *va = page_info->page_address + GVE_RX_PAD + - page_info->page_offset; + (page_info->page_offset ? PAGE_SIZE / 2 : 0); if (unlikely(!skb)) return NULL; @@ -251,8 +305,7 @@ static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx, return skb; } -static struct sk_buff *gve_rx_add_frags(struct net_device *dev, - struct napi_struct *napi, +static struct sk_buff *gve_rx_add_frags(struct napi_struct *napi, struct gve_rx_slot_page_info *page_info, u16 len) { @@ -262,20 +315,19 @@ static struct sk_buff *gve_rx_add_frags(struct net_device *dev, return NULL; skb_add_rx_frag(skb, 0, page_info->page, - page_info->page_offset + + (page_info->page_offset ? 
PAGE_SIZE / 2 : 0) + GVE_RX_PAD, len, PAGE_SIZE / 2); return skb; } -static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info, - struct gve_rx_data_slot *data_ring) +static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info, __be64 *slot_addr) { - u64 addr = be64_to_cpu(data_ring->qpl_offset); + const __be64 offset = cpu_to_be64(PAGE_SIZE / 2); - page_info->page_offset ^= PAGE_SIZE / 2; - addr ^= PAGE_SIZE / 2; - data_ring->qpl_offset = cpu_to_be64(addr); + /* "flip" to other packet buffer on this page */ + page_info->page_offset ^= 0x1; + *(slot_addr) ^= offset; } static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, @@ -285,7 +337,9 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, struct gve_priv *priv = rx->gve; struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi; struct net_device *dev = priv->dev; - struct sk_buff *skb; + union gve_rx_data_slot *data_slot; + struct sk_buff *skb = NULL; + dma_addr_t page_bus; int pagecount; u16 len; @@ -294,18 +348,18 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, u64_stats_update_begin(&rx->statss); rx->rx_desc_err_dropped_pkt++; u64_stats_update_end(&rx->statss); - return true; + return false; } len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD; page_info = &rx->data.page_info[idx]; - dma_sync_single_for_cpu(&priv->pdev->dev, rx->data.qpl->page_buses[idx], - PAGE_SIZE, DMA_FROM_DEVICE); - /* gvnic can only receive into registered segments. If the buffer - * can't be recycled, our only choice is to copy the data out of - * it so that we can return it to the device. - */ + data_slot = &rx->data.data_ring[idx]; + page_bus = (rx->data.raw_addressing) ? + be64_to_cpu(data_slot->addr) & GVE_DATA_SLOT_ADDR_PAGE_MASK : + rx->data.qpl->page_buses[idx]; + dma_sync_single_for_cpu(&priv->pdev->dev, page_bus, + PAGE_SIZE, DMA_FROM_DEVICE); if (PAGE_SIZE == 4096) { if (len <= priv->rx_copybreak) { @@ -316,6 +370,10 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, u64_stats_update_end(&rx->statss); goto have_skb; } + if (rx->data.raw_addressing) { + skb = gve_rx_add_frags(napi, page_info, len); + goto have_skb; + } if (unlikely(!gve_can_recycle_pages(dev))) { skb = gve_rx_copy(rx, dev, napi, page_info, len); goto have_skb; @@ -326,17 +384,17 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, * the page fragment to a new SKB and pass it up the * stack. */ - skb = gve_rx_add_frags(dev, napi, page_info, len); + skb = gve_rx_add_frags(napi, page_info, len); if (!skb) { u64_stats_update_begin(&rx->statss); rx->rx_skb_alloc_fail++; u64_stats_update_end(&rx->statss); - return true; + return false; } /* Make sure the kernel stack can't release the page */ get_page(page_info->page); /* "flip" to other packet buffer on this page */ - gve_rx_flip_buff(page_info, &rx->data.data_ring[idx]); + gve_rx_flip_buff(page_info, &rx->data.data_ring[idx].qpl_offset); } else if (pagecount >= 2) { /* We have previously passed the other half of this * page up the stack, but it has not yet been freed. 
@@ -347,7 +405,10 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, return false; } } else { - skb = gve_rx_copy(rx, dev, napi, page_info, len); + if (rx->data.raw_addressing) + skb = gve_rx_add_frags(napi, page_info, len); + else + skb = gve_rx_copy(rx, dev, napi, page_info, len); } have_skb: @@ -358,7 +419,7 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, u64_stats_update_begin(&rx->statss); rx->rx_skb_alloc_fail++; u64_stats_update_end(&rx->statss); - return true; + return false; } if (likely(feat & NETIF_F_RXCSUM)) { @@ -399,19 +460,49 @@ static bool gve_rx_work_pending(struct gve_rx_ring *rx) return (GVE_SEQNO(flags_seq) == rx->desc.seqno); } +static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx) +{ + bool empty = rx->fill_cnt == rx->cnt; + u32 fill_cnt = rx->fill_cnt; + + while (empty || ((fill_cnt & rx->mask) != (rx->cnt & rx->mask))) { + struct gve_rx_slot_page_info *page_info; + struct device *dev = &priv->pdev->dev; + union gve_rx_data_slot *data_slot; + u32 idx = fill_cnt & rx->mask; + + page_info = &rx->data.page_info[idx]; + data_slot = &rx->data.data_ring[idx]; + gve_rx_free_buffer(dev, page_info, data_slot); + page_info->page = NULL; + if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot)) { + u64_stats_update_begin(&rx->statss); + rx->rx_buf_alloc_fail++; + u64_stats_update_end(&rx->statss); + break; + } + empty = false; + fill_cnt++; + } + rx->fill_cnt = fill_cnt; + return true; +} + bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, netdev_features_t feat) { struct gve_priv *priv = rx->gve; + u32 work_done = 0, packets = 0; struct gve_rx_desc *desc; u32 cnt = rx->cnt; u32 idx = cnt & rx->mask; - u32 work_done = 0; u64 bytes = 0; desc = rx->desc.desc_ring + idx; while ((GVE_SEQNO(desc->flags_seq) == rx->desc.seqno) && work_done < budget) { + bool dropped; + netif_info(priv, rx_status, priv->dev, "[%d] idx=%d desc=%p desc->flags_seq=0x%x\n", rx->q_num, idx, desc, desc->flags_seq); @@ -419,9 +510,11 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, "[%d] seqno=%d rx->desc.seqno=%d\n", rx->q_num, GVE_SEQNO(desc->flags_seq), rx->desc.seqno); - bytes += be16_to_cpu(desc->len) - GVE_RX_PAD; - if (!gve_rx(rx, desc, feat, idx)) - gve_schedule_reset(priv); + dropped = !gve_rx(rx, desc, feat, idx); + if (!dropped) { + bytes += be16_to_cpu(desc->len) - GVE_RX_PAD; + packets++; + } cnt++; idx = cnt & rx->mask; desc = rx->desc.desc_ring + idx; @@ -429,15 +522,34 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, work_done++; } - if (!work_done) + if (!work_done && rx->fill_cnt - cnt > rx->db_threshold) return false; u64_stats_update_begin(&rx->statss); - rx->rpackets += work_done; + rx->rpackets += packets; rx->rbytes += bytes; u64_stats_update_end(&rx->statss); rx->cnt = cnt; - rx->fill_cnt += work_done; + + /* restock ring slots */ + if (!rx->data.raw_addressing) { + /* In QPL mode buffs are refilled as the desc are processed */ + rx->fill_cnt += work_done; + } else if (rx->fill_cnt - cnt <= rx->db_threshold) { + /* In raw addressing mode buffs are only refilled if the avail + * falls below a threshold. + */ + if (!gve_rx_refill_buffers(priv, rx)) + return false; + + /* If we were not able to completely refill buffers, we'll want + * to schedule this queue for work again to refill buffers. 
+ */ + if (rx->fill_cnt - cnt <= rx->db_threshold) { + gve_rx_write_doorbell(priv, rx); + return true; + } + } gve_rx_write_doorbell(priv, rx); return gve_rx_work_pending(rx);
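
To make the buffer-recycling arithmetic in this patch concrete: each 4KB page backs two 2KB receive buffers, page_offset is now a 0/1 flag, gve_rx_flip_buff() hands the other half of the page to the NIC by XORing the posted address with PAGE_SIZE / 2, and GVE_DATA_SLOT_ADDR_PAGE_MASK recovers the page-aligned DMA address when the buffer is freed. The snippet below is only a standalone userspace model of that arithmetic (plain uint64_t instead of __be64, no DMA mapping, names loosely follow the driver's); it is not part of the patch.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define DATA_SLOT_ADDR_PAGE_MASK (~((uint64_t)PAGE_SIZE - 1))

struct slot_page_info {
	uint8_t page_offset;	/* 0 = first half of the page, 1 = second half */
};

/* Model of gve_rx_flip_buff(): toggle which half of the page the NIC owns. */
static void flip_buff(struct slot_page_info *info, uint64_t *slot_addr)
{
	info->page_offset ^= 0x1;
	*slot_addr ^= PAGE_SIZE / 2;
}

int main(void)
{
	struct slot_page_info info = { .page_offset = 0 };
	uint64_t slot_addr = 0x7f0000a000ULL;	/* page-aligned example address */

	flip_buff(&info, &slot_addr);
	printf("flag=%u slot=0x%llx page=0x%llx\n", info.page_offset,
	       (unsigned long long)slot_addr,
	       (unsigned long long)(slot_addr & DATA_SLOT_ADDR_PAGE_MASK));
	flip_buff(&info, &slot_addr);
	printf("flag=%u slot=0x%llx\n", info.page_offset,
	       (unsigned long long)slot_addr);
	return 0;
}
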
From patchwork Wed Nov 18 23:20:14 2020
X-Patchwork-Submitter: David Awogbemila
X-Patchwork-Id: 327808
Date: Wed, 18 Nov 2020 15:20:14 -0800
In-Reply-To: <20201118232014.2910642-1-awogbemila@google.com>
Message-Id: <20201118232014.2910642-5-awogbemila@google.com>
Mime-Version: 1.0
References: <20201118232014.2910642-1-awogbemila@google.com>
X-Mailer: git-send-email 2.29.2.299.gdc1121823c-goog
Subject: [PATCH net-next v7 4/4] gve: Add support for raw addressing in the tx path
From: David Awogbemila
To: netdev@vger.kernel.org
Cc: Catherine Sullivan , Yangchun Fu , David Awogbemila
Precedence: bulk
List-ID: X-Mailing-List: netdev@vger.kernel.org

From: Catherine Sullivan

During TX, skbs' data addresses are dma_map'ed and passed to the NIC. This means that the device can perform DMA directly from these addresses and the driver does not have to copy the buffer content into pre-allocated buffers/qpls (as in qpl mode).

Reviewed-by: Yangchun Fu
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h        |  15 +-
 drivers/net/ethernet/google/gve/gve_adminq.c |   4 +-
 drivers/net/ethernet/google/gve/gve_desc.h   |   8 +-
 drivers/net/ethernet/google/gve/gve_tx.c     | 209 +++++++++++++++----
 4 files changed, 189 insertions(+), 47 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 8aad4af2aa2b..8768f486b942 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -112,12 +112,20 @@ struct gve_tx_iovec { u32 iov_padding; /* padding associated with this segment */ }; +struct gve_tx_dma_buf { + DEFINE_DMA_UNMAP_ADDR(dma); + DEFINE_DMA_UNMAP_LEN(len); +}; + /* Tracks the memory in the fifo occupied by the skb. Mapped 1:1 to a desc * ring entry but only used for a pkt_desc not a seg_desc */ struct gve_tx_buffer_state { struct sk_buff *skb; /* skb for this pkt */ - struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */ + union { + struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */ + struct gve_tx_dma_buf buf; + }; }; /* A TX buffer - each queue has one */ @@ -140,13 +148,16 @@ struct gve_tx_ring { __be32 last_nic_done ____cacheline_aligned; /* NIC tail pointer */ u64 pkt_done; /* free-running - total packets completed */ u64 bytes_done; /* free-running - total bytes completed */ + u32 dropped_pkt; /* free-running - total packets dropped */ /* Cacheline 2 -- Read-mostly fields */ union gve_tx_desc *desc ____cacheline_aligned; struct gve_tx_buffer_state *info; /* Maps 1:1 to a desc */ struct netdev_queue *netdev_txq; struct gve_queue_resources *q_resources; /* head and tail pointer idx */ + struct device *dev; u32 mask; /* masks req and done down to queue size */ + u8 raw_addressing; /* use raw_addressing? */ /* Slow-path fields */ u32 q_num ____cacheline_aligned; /* queue idx */ @@ -442,7 +453,7 @@ static inline u32 gve_rx_idx_to_ntfy(struct gve_priv *priv, u32 queue_idx) */ static inline u32 gve_num_tx_qpls(struct gve_priv *priv) { - return priv->tx_cfg.num_queues; + return priv->raw_addressing ?
0 : priv->tx_cfg.num_queues; } /* Returns the number of rx queue page lists diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 738dc7f9dd9c..66c90eac3ecd 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -369,8 +369,10 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index) { struct gve_tx_ring *tx = &priv->tx[queue_index]; union gve_adminq_command cmd; + u32 qpl_id; int err; + qpl_id = priv->raw_addressing ? GVE_RAW_ADDRESSING_QPL_ID : tx->tx_fifo.qpl->id; memset(&cmd, 0, sizeof(cmd)); cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_TX_QUEUE); cmd.create_tx_queue = (struct gve_adminq_create_tx_queue) { @@ -379,7 +381,7 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index) .queue_resources_addr = cpu_to_be64(tx->q_resources_bus), .tx_ring_addr = cpu_to_be64(tx->bus), - .queue_page_list_id = cpu_to_be32(tx->tx_fifo.qpl->id), + .queue_page_list_id = cpu_to_be32(qpl_id), .ntfy_id = cpu_to_be32(tx->ntfy_id), }; diff --git a/drivers/net/ethernet/google/gve/gve_desc.h b/drivers/net/ethernet/google/gve/gve_desc.h index a1c0aaa60139..05ae6300e984 100644 --- a/drivers/net/ethernet/google/gve/gve_desc.h +++ b/drivers/net/ethernet/google/gve/gve_desc.h @@ -16,9 +16,11 @@ * Base addresses encoded in seg_addr are not assumed to be physical * addresses. The ring format assumes these come from some linear address * space. This could be physical memory, kernel virtual memory, user virtual - * memory. gVNIC uses lists of registered pages. Each queue is assumed - * to be associated with a single such linear address space to ensure a - * consistent meaning for seg_addrs posted to its rings. + * memory. + * If raw dma addressing is not supported then gVNIC uses lists of registered + * pages. Each queue is assumed to be associated with a single such linear + * address space to ensure a consistent meaning for seg_addrs posted to its + * rings. 
*/ struct gve_tx_pkt_desc { diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index d0244feb0301..a0befaf54855 100644 --- a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -158,9 +158,11 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx) tx->q_resources, tx->q_resources_bus); tx->q_resources = NULL; - gve_tx_fifo_release(priv, &tx->tx_fifo); - gve_unassign_qpl(priv, tx->tx_fifo.qpl->id); - tx->tx_fifo.qpl = NULL; + if (!tx->raw_addressing) { + gve_tx_fifo_release(priv, &tx->tx_fifo); + gve_unassign_qpl(priv, tx->tx_fifo.qpl->id); + tx->tx_fifo.qpl = NULL; + } bytes = sizeof(*tx->desc) * slots; dma_free_coherent(hdev, bytes, tx->desc, tx->bus); @@ -206,11 +208,15 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) if (!tx->desc) goto abort_with_info; - tx->tx_fifo.qpl = gve_assign_tx_qpl(priv); + tx->raw_addressing = priv->raw_addressing; + tx->dev = &priv->pdev->dev; + if (!tx->raw_addressing) { + tx->tx_fifo.qpl = gve_assign_tx_qpl(priv); - /* map Tx FIFO */ - if (gve_tx_fifo_init(priv, &tx->tx_fifo)) - goto abort_with_desc; + /* map Tx FIFO */ + if (gve_tx_fifo_init(priv, &tx->tx_fifo)) + goto abort_with_desc; + } tx->q_resources = dma_alloc_coherent(hdev, @@ -228,7 +234,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) return 0; abort_with_fifo: - gve_tx_fifo_release(priv, &tx->tx_fifo); + if (!tx->raw_addressing) + gve_tx_fifo_release(priv, &tx->tx_fifo); abort_with_desc: dma_free_coherent(hdev, bytes, tx->desc, tx->bus); tx->desc = NULL; @@ -301,27 +308,47 @@ static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx, return bytes; } -/* The most descriptors we could need are 3 - 1 for the headers, 1 for - * the beginning of the payload at the end of the FIFO, and 1 if the - * payload wraps to the beginning of the FIFO. +/* The most descriptors we could need is MAX_SKB_FRAGS + 3 : 1 for each skb frag, + * +1 for the skb linear portion, +1 for when tcp hdr needs to be in separate descriptor, + * and +1 if the payload wraps to the beginning of the FIFO. */ -#define MAX_TX_DESC_NEEDED 3 +#define MAX_TX_DESC_NEEDED (MAX_SKB_FRAGS + 3) +static void gve_tx_unmap_buf(struct device *dev, struct gve_tx_buffer_state *info) +{ + if (info->skb) { + dma_unmap_single(dev, dma_unmap_addr(&info->buf, dma), + dma_unmap_len(&info->buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(&info->buf, len, 0); + } else { + dma_unmap_page(dev, dma_unmap_addr(&info->buf, dma), + dma_unmap_len(&info->buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(&info->buf, len, 0); + } +} /* Check if sufficient resources (descriptor ring space, FIFO space) are * available to transmit the given number of bytes. */ static inline bool gve_can_tx(struct gve_tx_ring *tx, int bytes_required) { - return (gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED && - gve_tx_fifo_can_alloc(&tx->tx_fifo, bytes_required)); + bool can_alloc = true; + + if (!tx->raw_addressing) + can_alloc = gve_tx_fifo_can_alloc(&tx->tx_fifo, bytes_required); + + return (gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED && can_alloc); } /* Stops the queue if the skb cannot be transmitted. 
*/ static int gve_maybe_stop_tx(struct gve_tx_ring *tx, struct sk_buff *skb) { - int bytes_required; + int bytes_required = 0; + + if (!tx->raw_addressing) + bytes_required = gve_skb_fifo_bytes_required(tx, skb); - bytes_required = gve_skb_fifo_bytes_required(tx, skb); if (likely(gve_can_tx(tx, bytes_required))) return 0; @@ -390,22 +417,19 @@ static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc, seg_desc->seg.seg_addr = cpu_to_be64(addr); } -static void gve_dma_sync_for_device(struct device *dev, dma_addr_t *page_buses, +static void gve_dma_sync_for_device(struct device *dev, + dma_addr_t *page_buses, u64 iov_offset, u64 iov_len) { u64 last_page = (iov_offset + iov_len - 1) / PAGE_SIZE; u64 first_page = iov_offset / PAGE_SIZE; - dma_addr_t dma; u64 page; - for (page = first_page; page <= last_page; page++) { - dma = page_buses[page]; - dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE); - } + for (page = first_page; page <= last_page; page++) + dma_sync_single_for_device(dev, page_buses[page], PAGE_SIZE, DMA_TO_DEVICE); } -static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, - struct device *dev) +static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, struct sk_buff *skb) { int pad_bytes, hlen, hdr_nfrags, payload_nfrags, l4_hdr_offset; union gve_tx_desc *pkt_desc, *seg_desc; @@ -429,7 +453,7 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, hlen = is_gso ? l4_hdr_offset + tcp_hdrlen(skb) : skb_headlen(skb); - info->skb = skb; + info->skb = skb; /* We don't want to split the header, so if necessary, pad to the end * of the fifo and then put the header at the beginning of the fifo. */ @@ -447,7 +471,7 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, skb_copy_bits(skb, 0, tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset, hlen); - gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses, + gve_dma_sync_for_device(&priv->pdev->dev, tx->tx_fifo.qpl->page_buses, info->iov[hdr_nfrags - 1].iov_offset, info->iov[hdr_nfrags - 1].iov_len); copy_offset = hlen; @@ -463,7 +487,7 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, skb_copy_bits(skb, copy_offset, tx->tx_fifo.base + info->iov[i].iov_offset, info->iov[i].iov_len); - gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses, + gve_dma_sync_for_device(&priv->pdev->dev, tx->tx_fifo.qpl->page_buses, info->iov[i].iov_offset, info->iov[i].iov_len); copy_offset += info->iov[i].iov_len; @@ -472,6 +496,96 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, return 1 + payload_nfrags; } +static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx, + struct sk_buff *skb) +{ + const struct skb_shared_info *shinfo = skb_shinfo(skb); + int hlen, payload_nfrags, l4_hdr_offset; + union gve_tx_desc *pkt_desc, *seg_desc; + struct gve_tx_buffer_state *info; + bool is_gso = skb_is_gso(skb); + u32 idx = tx->req & tx->mask; + struct gve_tx_dma_buf *buf; + u64 addr; + u32 len; + int i; + + info = &tx->info[idx]; + pkt_desc = &tx->desc[idx]; + + l4_hdr_offset = skb_checksum_start_offset(skb); + /* If the skb is gso, then we want only up to the tcp header in the first segment + * to efficiently replicate on each segment otherwise we want the linear portion + * of the skb (which will contain the checksum because skb->csum_start and + * skb->csum_offset are given relative to skb->head) in the first segment. + */ + hlen = is_gso ? 
l4_hdr_offset + tcp_hdrlen(skb) : skb_headlen(skb); + len = skb_headlen(skb); + + info->skb = skb; + + addr = dma_map_single(tx->dev, skb->data, len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(tx->dev, addr))) { + rtnl_lock(); + priv->dma_mapping_error++; + rtnl_unlock(); + goto drop; + } + buf = &info->buf; + dma_unmap_len_set(buf, len, len); + dma_unmap_addr_set(buf, dma, addr); + + payload_nfrags = shinfo->nr_frags; + if (hlen < len) { + /* For gso the rest of the linear portion of the skb needs to + * be in its own descriptor. + */ + payload_nfrags++; + gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset, + 1 + payload_nfrags, hlen, addr); + + len -= hlen; + addr += hlen; + idx = (tx->req + 1) & tx->mask; + seg_desc = &tx->desc[idx]; + gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr); + } else { + gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset, + 1 + payload_nfrags, hlen, addr); + } + + for (i = 0; i < shinfo->nr_frags; i++) { + const skb_frag_t *frag = &shinfo->frags[i]; + + idx = (idx + 1) & tx->mask; + seg_desc = &tx->desc[idx]; + len = skb_frag_size(frag); + addr = skb_frag_dma_map(tx->dev, frag, 0, len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(tx->dev, addr))) { + priv->dma_mapping_error++; + goto unmap_drop; + } + buf = &tx->info[idx].buf; + tx->info[idx].skb = NULL; + dma_unmap_len_set(buf, len, len); + dma_unmap_addr_set(buf, dma, addr); + + gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr); + } + + return 1 + payload_nfrags; + +unmap_drop: + i += (payload_nfrags == shinfo->nr_frags ? 1 : 2); + while (i--) { + idx--; + gve_tx_unmap_buf(tx->dev, &tx->info[idx & tx->mask]); + } +drop: + tx->dropped_pkt++; + return 0; +} + netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev) { struct gve_priv *priv = netdev_priv(dev); @@ -490,18 +604,26 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev) gve_tx_put_doorbell(priv, tx->q_resources, tx->req); return NETDEV_TX_BUSY; } - nsegs = gve_tx_add_skb(tx, skb, &priv->pdev->dev); - - netdev_tx_sent_queue(tx->netdev_txq, skb->len); - skb_tx_timestamp(skb); - - /* give packets to NIC */ - tx->req += nsegs; + if (tx->raw_addressing) + nsegs = gve_tx_add_skb_no_copy(priv, tx, skb); + else + nsegs = gve_tx_add_skb_copy(priv, tx, skb); + + /* If the packet is getting sent, we need to update the skb */ + if (nsegs) { + netdev_tx_sent_queue(tx->netdev_txq, skb->len); + skb_tx_timestamp(skb); + + /* Give packets to NIC. Even if this packet failed to send the doorbell + * might need to be rung because of xmit_more. 
+ */ + tx->req += nsegs; - if (!netif_xmit_stopped(tx->netdev_txq) && netdev_xmit_more()) - return NETDEV_TX_OK; + if (!netif_xmit_stopped(tx->netdev_txq) && netdev_xmit_more()) + return NETDEV_TX_OK; - gve_tx_put_doorbell(priv, tx->q_resources, tx->req); + gve_tx_put_doorbell(priv, tx->q_resources, tx->req); + } return NETDEV_TX_OK; } @@ -525,24 +647,29 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx, info = &tx->info[idx]; skb = info->skb; + /* Unmap the buffer */ + if (tx->raw_addressing) + gve_tx_unmap_buf(tx->dev, info); + tx->done++; /* Mark as free */ if (skb) { info->skb = NULL; bytes += skb->len; pkts++; dev_consume_skb_any(skb); + if (tx->raw_addressing) + continue; /* FIFO free */ for (i = 0; i < ARRAY_SIZE(info->iov); i++) { - space_freed += info->iov[i].iov_len + - info->iov[i].iov_padding; + space_freed += info->iov[i].iov_len + info->iov[i].iov_padding; info->iov[i].iov_len = 0; info->iov[i].iov_padding = 0; } } - tx->done++; } - gve_tx_free_fifo(&tx->tx_fifo, space_freed); + if (!tx->raw_addressing) + gve_tx_free_fifo(&tx->tx_fifo, space_freed); u64_stats_update_begin(&tx->statss); tx->bytes_done += bytes; tx->pkt_done += pkts;
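
As a rough illustration of the descriptor accounting introduced for the raw-addressing tx path above: gve_tx_add_skb_no_copy() uses one packet descriptor for the headers, one extra segment descriptor when the linear region extends past hlen (the gso case), and one segment descriptor per skb fragment, which is what the new MAX_TX_DESC_NEEDED bound of MAX_SKB_FRAGS + 3 has to cover. The snippet below is only a standalone sketch of that count (userspace C, MAX_SKB_FRAGS assumed to be 17 here); it is not driver code.

#include <stdio.h>

#define MAX_SKB_FRAGS		17	/* assumed value; kernel config dependent */
#define MAX_TX_DESC_NEEDED	(MAX_SKB_FRAGS + 3)

/* Descriptors used by the no-copy path: 1 pkt_desc for the headers, +1
 * seg_desc if linear data continues past hlen (gso), +1 seg_desc per frag.
 */
static int descs_needed(unsigned int hlen, unsigned int linear_len,
			unsigned int nr_frags)
{
	int payload_nfrags = nr_frags;

	if (hlen < linear_len)
		payload_nfrags++;
	return 1 + payload_nfrags;
}

int main(void)
{
	/* gso skb: 66 bytes of headers, 1448 more linear bytes, 2 page frags */
	int n = descs_needed(66, 66 + 1448, 2);

	printf("needs %d descriptors, worst-case budget is %d\n",
	       n, MAX_TX_DESC_NEEDED);
	return 0;
}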