From patchwork Tue Nov 3 17:46:48 2020
X-Patchwork-Id: 315721
Date: Tue, 3 Nov 2020 09:46:48 -0800
In-Reply-To: <20201103174651.590586-1-awogbemila@google.com>
Message-Id: <20201103174651.590586-2-awogbemila@google.com>
Subject: [PATCH 1/4] gve: Add support for raw addressing device option
From: David Awogbemila
To: netdev@vger.kernel.org
Cc: Catherine Sullivan, Yangchun Fu, David Awogbemila

From: Catherine Sullivan

Add support for parsing device options as part of device description.
As the first device option, add raw addressing.

"Raw addressing" mode (as opposed to the current "qpl" mode) is an
operational mode which allows the driver to avoid the bounce-buffer
copies it currently performs using pre-allocated qpls
(queue_page_lists) when sending and receiving packets.

For egress packets, the provided skb data addresses will be dma_map'ed
and passed to the device, allowing the NIC to perform DMA directly -
the driver will not have to copy the buffer content into pre-allocated
buffers/qpls (as in qpl mode).

For ingress packets, copies are also eliminated because buffers are
handed to the networking stack and then recycled or re-allocated as
necessary, avoiding the use of skb_copy_to_linear_data().

This patch only introduces the option to the driver. Subsequent
patches will add the ingress and egress functionality.

Reviewed-by: Yangchun Fu
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h        |  1 +
 drivers/net/ethernet/google/gve/gve_adminq.c | 52 ++++++++++++++++++++
 drivers/net/ethernet/google/gve/gve_adminq.h | 15 ++++--
 drivers/net/ethernet/google/gve/gve_main.c   |  9 ++++
 4 files changed, 73 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index f5c80229ea96..80cdae06ee39 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -199,6 +199,7 @@ struct gve_priv {
 	u64 num_registered_pages; /* num pages registered with NIC */
 	u32 rx_copybreak; /* copy packets smaller than this */
 	u16 default_num_queues; /* default num queues to set up */
+	bool raw_addressing; /* true if this dev supports raw addressing */
 
 	struct gve_queue_config tx_cfg;
 	struct gve_queue_config rx_cfg;
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 24ae6a28a806..0b7a2653fe33 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -460,11 +460,14 @@ int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 int gve_adminq_describe_device(struct gve_priv *priv)
 {
 	struct gve_device_descriptor *descriptor;
+	struct gve_device_option *dev_opt;
 	union gve_adminq_command cmd;
 	dma_addr_t descriptor_bus;
+	u16 num_options;
 	int err = 0;
 	u8 *mac;
 	u16 mtu;
+	int i;
 
 	memset(&cmd, 0, sizeof(cmd));
 	descriptor = dma_alloc_coherent(&priv->pdev->dev, PAGE_SIZE,
@@ -518,6 +521,55 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 		priv->rx_desc_cnt = priv->rx_pages_per_qpl;
 	}
 	priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues);
+	dev_opt = (void *)(descriptor + 1);
+
+	num_options = be16_to_cpu(descriptor->num_device_options);
+	for (i = 0; i < num_options; i++) {
+		u16 option_length = be16_to_cpu(dev_opt->option_length);
+		u16 option_id = be16_to_cpu(dev_opt->option_id);
+		void *option_end;
+
+		option_end = (void *)dev_opt + sizeof(*dev_opt) +
option_length; + if (option_end > (void *)descriptor + be16_to_cpu(descriptor->total_length)) { + dev_err(&priv->dev->dev, + "options exceed device_descriptor's total length.\n"); + err = -EINVAL; + goto free_device_descriptor; + } + + switch (option_id) { + case GVE_DEV_OPT_ID_RAW_ADDRESSING: + /* If the length or feature mask doesn't match, + * continue without enabling the feature. + */ + if (option_length != GVE_DEV_OPT_LEN_RAW_ADDRESSING || + dev_opt->feat_mask != + cpu_to_be32(GVE_DEV_OPT_FEAT_MASK_RAW_ADDRESSING)) { + dev_warn(&priv->pdev->dev, + "Raw addressing option error:\n" + " Expected: length=%d, feature_mask=%x.\n" + " Actual: length=%d, feature_mask=%x.\n", + GVE_DEV_OPT_LEN_RAW_ADDRESSING, + cpu_to_be32(GVE_DEV_OPT_FEAT_MASK_RAW_ADDRESSING), + option_length, dev_opt->feat_mask); + priv->raw_addressing = false; + } else { + dev_info(&priv->pdev->dev, + "Raw addressing device option enabled.\n"); + priv->raw_addressing = true; + } + break; + default: + /* If we don't recognize the option just continue + * without doing anything. + */ + dev_dbg(&priv->pdev->dev, + "Unrecognized device option 0x%hx not enabled.\n", + option_id); + break; + } + dev_opt = (void *)dev_opt + sizeof(*dev_opt) + option_length; + } free_device_descriptor: dma_free_coherent(&priv->pdev->dev, sizeof(*descriptor), descriptor, diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h index 281de8326bc5..af5f586167bd 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.h +++ b/drivers/net/ethernet/google/gve/gve_adminq.h @@ -79,12 +79,17 @@ struct gve_device_descriptor { static_assert(sizeof(struct gve_device_descriptor) == 40); -struct device_option { - __be32 option_id; - __be32 option_length; +struct gve_device_option { + __be16 option_id; + __be16 option_length; + __be32 feat_mask; }; -static_assert(sizeof(struct device_option) == 8); +static_assert(sizeof(struct gve_device_option) == 8); + +#define GVE_DEV_OPT_ID_RAW_ADDRESSING 0x1 +#define GVE_DEV_OPT_LEN_RAW_ADDRESSING 0x0 +#define GVE_DEV_OPT_FEAT_MASK_RAW_ADDRESSING 0x0 struct gve_adminq_configure_device_resources { __be64 counter_array; @@ -111,6 +116,8 @@ struct gve_adminq_unregister_page_list { static_assert(sizeof(struct gve_adminq_unregister_page_list) == 4); +#define GVE_RAW_ADDRESSING_QPL_ID 0xFFFFFFFF + struct gve_adminq_create_tx_queue { __be32 queue_id; __be32 reserved; diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 48a433154ce0..70685c10db0e 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -678,6 +678,10 @@ static int gve_alloc_qpls(struct gve_priv *priv) int i, j; int err; + /* Raw addressing means no QPLs */ + if (priv->raw_addressing) + return 0; + priv->qpls = kvzalloc(num_qpls * sizeof(*priv->qpls), GFP_KERNEL); if (!priv->qpls) return -ENOMEM; @@ -718,6 +722,10 @@ static void gve_free_qpls(struct gve_priv *priv) int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv); int i; + /* Raw addressing means no QPLs */ + if (priv->raw_addressing) + return; + kvfree(priv->qpl_cfg.qpl_id_map); for (i = 0; i < num_qpls; i++) @@ -1078,6 +1086,7 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) if (skip_describe_device) goto setup_device; + priv->raw_addressing = false; /* Get the initial information we need from the device */ err = gve_adminq_describe_device(priv); if (err) { From patchwork Tue Nov 3 17:46:50 2020 Content-Type: text/plain; 
From patchwork Tue Nov 3 17:46:50 2020
X-Patchwork-Id: 315720
Date: Tue, 3 Nov 2020 09:46:50 -0800
In-Reply-To: <20201103174651.590586-1-awogbemila@google.com>
Message-Id: <20201103174651.590586-4-awogbemila@google.com>
Subject: [PATCH 3/4] gve: Rx Buffer Recycling
From: David Awogbemila
To: netdev@vger.kernel.org
Cc: David Awogbemila

This patch lets the driver reuse buffers that have been freed by the
networking stack. In the raw addressing case, this allows the driver
to avoid allocating new buffers. In the qpl case, the driver can avoid
copies.

Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h    |  10 +-
 drivers/net/ethernet/google/gve/gve_rx.c | 194 +++++++++++++++--------
 2 files changed, 129 insertions(+), 75 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 5c48f9e2d04c..d1a2a526a250 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -50,6 +50,7 @@ struct gve_rx_slot_page_info {
 	struct page *page;
 	void *page_address;
 	u32 page_offset; /* offset to write to in page */
+	bool can_flip;
 };
 
 /* A list of pages registered with the device during setup and used by a queue
@@ -503,15 +504,6 @@ static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv,
 	return DMA_FROM_DEVICE;
 }
 
-/* Returns true if the max mtu allows page recycling */
-static inline bool gve_can_recycle_pages(struct net_device *dev)
-{
-	/* We can't recycle the pages if we can't fit a packet into half a
-	 * page.
-	 */
-	return dev->max_mtu <= PAGE_SIZE / 2;
-}
-
 /* buffers */
 int gve_alloc_page(struct gve_priv *priv, struct device *dev,
 		   struct page **page, dma_addr_t *dma,
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 6d1ca6985eec..2e9d5c6fc411 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -270,8 +270,7 @@ static enum pkt_hash_types gve_rss_type(__be16 pkt_flags)
 	return PKT_HASH_TYPE_L2;
 }
 
-static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx,
-				   struct net_device *dev,
+static struct sk_buff *gve_rx_copy(struct net_device *dev,
 				   struct napi_struct *napi,
 				   struct gve_rx_slot_page_info *page_info,
 				   u16 len)
@@ -289,10 +288,6 @@ static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx,
 
 	skb->protocol = eth_type_trans(skb, dev);
 
-	u64_stats_update_begin(&rx->statss);
-	rx->rx_copied_pkt++;
-	u64_stats_update_end(&rx->statss);
-
 	return skb;
 }
 
@@ -338,22 +333,90 @@ static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info,
 {
 	u64 addr = be64_to_cpu(data_ring->addr);
 
+	/* "flip" to other packet buffer on this page */
 	page_info->page_offset ^= PAGE_SIZE / 2;
 	addr ^= PAGE_SIZE / 2;
 	data_ring->addr = cpu_to_be64(addr);
 }
 
+static bool gve_rx_can_flip_buffers(struct net_device *netdev)
+{
+#if PAGE_SIZE == 4096
+	/* We can't flip a buffer if we can't fit a packet
+	 * into half a page.
+	 */
+	return netdev->mtu + GVE_RX_PAD + ETH_HLEN <= PAGE_SIZE / 2;
+#else
+	/* PAGE_SIZE != 4096 - don't try to reuse */
+	return false;
+#endif
+}
+
+static int gve_rx_can_recycle_buffer(struct page *page)
+{
+	int pagecount = page_count(page);
+
+	/* This page is not being used by any SKBs - reuse */
+	if (pagecount == 1)
+		return 1;
+	/* This page is still being used by an SKB - we can't reuse */
+	else if (pagecount >= 2)
+		return 0;
+	WARN(pagecount < 1, "Pagecount should never be < 1");
+	return -1;
+}
+
 static struct sk_buff *
 gve_rx_raw_addressing(struct device *dev, struct net_device *netdev,
 		      struct gve_rx_slot_page_info *page_info, u16 len,
 		      struct napi_struct *napi,
-		      struct gve_rx_data_slot *data_slot)
+		      struct gve_rx_data_slot *data_slot, bool can_flip)
 {
-	struct sk_buff *skb = gve_rx_add_frags(napi, page_info, len);
+	struct sk_buff *skb;
 
+	skb = gve_rx_add_frags(napi, page_info, len);
 	if (!skb)
 		return NULL;
 
+	/* Optimistically stop the kernel from freeing the page by increasing
+	 * the page bias. We will check the refcount in refill to determine if
+	 * we need to alloc a new page.
+	 */
+	get_page(page_info->page);
+	page_info->can_flip = can_flip;
+
+	return skb;
+}
+
+static struct sk_buff *
+gve_rx_qpl(struct device *dev, struct net_device *netdev,
+	   struct gve_rx_ring *rx, struct gve_rx_slot_page_info *page_info,
+	   u16 len, struct napi_struct *napi,
+	   struct gve_rx_data_slot *data_slot, bool recycle)
+{
+	struct sk_buff *skb;
+
+	/* if raw_addressing mode is not enabled gvnic can only receive into
+	 * registered segments. If the buffer can't be recycled, our only
+	 * choice is to copy the data out of it so that we can return it to the
+	 * device.
+	 */
+	if (recycle) {
+		skb = gve_rx_add_frags(napi, page_info, len);
+		/* No point in recycling if we didn't get the skb */
+		if (skb) {
+			/* Make sure the networking stack can't free the page */
+			get_page(page_info->page);
+			gve_rx_flip_buff(page_info, data_slot);
+		}
+	} else {
+		skb = gve_rx_copy(netdev, napi, page_info, len);
+		if (skb) {
+			u64_stats_update_begin(&rx->statss);
+			rx->rx_copied_pkt++;
+			u64_stats_update_end(&rx->statss);
+		}
+	}
 	return skb;
 }
 
@@ -367,7 +430,6 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
 	struct gve_rx_data_slot *data_slot;
 	struct sk_buff *skb = NULL;
 	dma_addr_t page_bus;
-	int pagecount;
 	u16 len;
 
 	/* drop this packet */
@@ -388,64 +450,36 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
 	dma_sync_single_for_cpu(&priv->pdev->dev, page_bus,
 				PAGE_SIZE, DMA_FROM_DEVICE);
 
-	if (PAGE_SIZE == 4096) {
-		if (len <= priv->rx_copybreak) {
-			/* Just copy small packets */
-			skb = gve_rx_copy(rx, dev, napi, page_info, len);
-			u64_stats_update_begin(&rx->statss);
-			rx->rx_copybreak_pkt++;
-			u64_stats_update_end(&rx->statss);
-			goto have_skb;
+	if (len <= priv->rx_copybreak) {
+		/* Just copy small packets */
+		skb = gve_rx_copy(dev, napi, page_info, len);
+		u64_stats_update_begin(&rx->statss);
+		rx->rx_copied_pkt++;
+		rx->rx_copybreak_pkt++;
+		u64_stats_update_end(&rx->statss);
+	} else {
+		bool can_flip = gve_rx_can_flip_buffers(dev);
+		int recycle = 0;
+
+		if (can_flip) {
+			recycle = gve_rx_can_recycle_buffer(page_info->page);
+			if (recycle < 0) {
+				gve_schedule_reset(priv);
+				return false;
+			}
 		}
 		if (rx->data.raw_addressing) {
 			skb = gve_rx_raw_addressing(&priv->pdev->dev, dev,
 						    page_info, len, napi,
-						    data_slot);
-			goto have_skb;
-		}
-		if (unlikely(!gve_can_recycle_pages(dev))) {
-			skb = gve_rx_copy(rx, dev, napi, page_info, len);
-			goto have_skb;
-		}
-		pagecount = page_count(page_info->page);
-		if (pagecount == 1) {
-			/* No part of this page is used by any SKBs; we attach
-			 * the page fragment to a new SKB and pass it up the
-			 * stack.
-			 */
-			skb = gve_rx_add_frags(napi, page_info, len);
-			if (!skb) {
-				u64_stats_update_begin(&rx->statss);
-				rx->rx_skb_alloc_fail++;
-				u64_stats_update_end(&rx->statss);
-				return false;
-			}
-			/* Make sure the kernel stack can't release the page */
-			get_page(page_info->page);
-			/* "flip" to other packet buffer on this page */
-			gve_rx_flip_buff(page_info, &rx->data.data_ring[idx]);
-		} else if (pagecount >= 2) {
-			/* We have previously passed the other half of this
-			 * page up the stack, but it has not yet been freed.
-			 */
-			skb = gve_rx_copy(rx, dev, napi, page_info, len);
+						    data_slot,
+						    can_flip && recycle);
 		} else {
-			WARN(pagecount < 1, "Pagecount should never be < 1");
-			return false;
+			skb = gve_rx_qpl(&priv->pdev->dev, dev, rx,
+					 page_info, len, napi, data_slot,
+					 can_flip && recycle);
 		}
-	} else {
-		if (rx->data.raw_addressing)
-			skb = gve_rx_raw_addressing(&priv->pdev->dev, dev,
-						    page_info, len, napi,
-						    data_slot);
-		else
-			skb = gve_rx_copy(rx, dev, napi, page_info, len);
 	}
 
-have_skb:
-	/* We didn't manage to allocate an skb but we haven't had any
-	 * reset worthy failures.
-	 */
 	if (!skb) {
 		u64_stats_update_begin(&rx->statss);
 		rx->rx_skb_alloc_fail++;
@@ -498,16 +532,44 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
 	while (empty || ((fill_cnt & rx->mask) != (rx->cnt & rx->mask))) {
 		struct gve_rx_slot_page_info *page_info;
-		struct device *dev = &priv->pdev->dev;
-		struct gve_rx_data_slot *data_slot;
 		u32 idx = fill_cnt & rx->mask;
 
 		page_info = &rx->data.page_info[idx];
-		data_slot = &rx->data.data_ring[idx];
-		gve_rx_free_buffer(dev, page_info, data_slot);
-		page_info->page = NULL;
-		if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot, rx))
-			break;
+		if (page_info->can_flip) {
+			/* The other half of the page is free because it was
+			 * free when we processed the descriptor. Flip to it.
+			 */
+			struct gve_rx_data_slot *data_slot =
+						&rx->data.data_ring[idx];
+
+			gve_rx_flip_buff(page_info, data_slot);
+			page_info->can_flip = false;
+		} else {
+			/* It is possible that the networking stack has already
+			 * finished processing all outstanding packets in the
+			 * buffer and it can be reused. Flipping is unnecessary
+			 * here - if the networking stack still owns half the
+			 * page it is impossible to tell which half. Either the
+			 * whole page is free or it needs to be replaced.
+			 */
+			int recycle = gve_rx_can_recycle_buffer(page_info->page);
+
+			if (recycle < 0) {
+				gve_schedule_reset(priv);
+				return false;
+			}
+			if (!recycle) {
+				/* We can't reuse the buffer - alloc a new one */
+				struct gve_rx_data_slot *data_slot =
+						&rx->data.data_ring[idx];
+				struct device *dev = &priv->pdev->dev;
+
+				gve_rx_free_buffer(dev, page_info, data_slot);
+				page_info->page = NULL;
+				if (gve_rx_alloc_buffer(priv, dev, page_info,
+							data_slot, rx))
+					break;
+			}
+		}
 		empty = false;
 		fill_cnt++;
 	}
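The recycling scheme above hinges on the page refcount:
gve_rx_raw_addressing() takes an extra reference with get_page() when
it hands a half-page to the stack, and gve_rx_can_recycle_buffer()
later treats page_count() == 1 as "only the driver holds the page"
(reusable) and >= 2 as "an skb still owns part of it" (copy or
re-allocate instead). The stand-alone sketch below models that
lifecycle with a hypothetical mock_page in place of the kernel's
struct page; it is illustrative only and not part of the patch.

#include <stdio.h>

/* Hypothetical stand-in for struct page; only the refcount matters
 * for this illustration.
 */
struct mock_page {
	int refcount; /* what page_count() would report */
};

/* Same contract as gve_rx_can_recycle_buffer():
 * 1 = only the driver references the page, safe to reuse;
 * 0 = an skb still holds a reference, don't reuse;
 * -1 = refcount below 1, which should never happen.
 */
static int can_recycle(const struct mock_page *page)
{
	if (page->refcount == 1)
		return 1;
	if (page->refcount >= 2)
		return 0;
	return -1;
}

int main(void)
{
	/* Freshly allocated rx buffer: the driver holds the only ref. */
	struct mock_page page = { .refcount = 1 };

	/* gve_rx_raw_addressing(): the skb now references the page and
	 * the driver keeps its own "bias" reference via get_page().
	 */
	page.refcount++;
	printf("skb in flight: can_recycle = %d\n", can_recycle(&page)); /* 0 */

	/* The stack consumes the skb and drops its reference. */
	page.refcount--;
	printf("skb freed:     can_recycle = %d\n", can_recycle(&page)); /* 1 */
	return 0;
}

Once the count drops back to 1, the refill path can flip to the other
half-page or hand the same half out again instead of calling
gve_rx_alloc_buffer().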