From patchwork Tue Aug 18 19:44:00 2020
X-Patchwork-Submitter: David Awogbemila <awogbemila@google.com>
X-Patchwork-Id: 262347
Date: Tue, 18 Aug 2020 12:44:00 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-2-awogbemila@google.com>
References:
<20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 01/18] gve: Get and set Rx copybreak via ethtool
From: David Awogbemila <awogbemila@google.com>
To: netdev@vger.kernel.org
Cc: Kuo Zhao, Yangchun Fu, David Awogbemila

From: Kuo Zhao

This adds support for getting and setting the RX copybreak value via
ethtool.

Reviewed-by: Yangchun Fu
Signed-off-by: Kuo Zhao
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_ethtool.c | 34 +++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index d8fa816f4473..469d3332bcd6 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -230,6 +230,38 @@ static int gve_user_reset(struct net_device *netdev, u32 *flags)
         return -EOPNOTSUPP;
 }
 
+static int gve_get_tunable(struct net_device *netdev,
+                           const struct ethtool_tunable *etuna, void *value)
+{
+        struct gve_priv *priv = netdev_priv(netdev);
+
+        switch (etuna->id) {
+        case ETHTOOL_RX_COPYBREAK:
+                *(u32 *)value = priv->rx_copybreak;
+                return 0;
+        default:
+                return -EINVAL;
+        }
+}
+
+static int gve_set_tunable(struct net_device *netdev,
+                           const struct ethtool_tunable *etuna, const void *value)
+{
+        struct gve_priv *priv = netdev_priv(netdev);
+        u32 len;
+
+        switch (etuna->id) {
+        case ETHTOOL_RX_COPYBREAK:
+                len = *(u32 *)value;
+                if (len > PAGE_SIZE / 2)
+                        return -EINVAL;
+                priv->rx_copybreak = len;
+                return 0;
+        default:
+                return -EINVAL;
+        }
+}
+
 const struct ethtool_ops gve_ethtool_ops = {
         .get_drvinfo = gve_get_drvinfo,
         .get_strings = gve_get_strings,
@@ -242,4 +274,6 @@ const struct ethtool_ops gve_ethtool_ops = {
         .get_link = ethtool_op_get_link,
         .get_ringparam = gve_get_ringparam,
         .reset = gve_user_reset,
+        .get_tunable = gve_get_tunable,
+        .set_tunable = gve_set_tunable,
 };
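The new callbacks plug into the generic ethtool tunable interface, so no
gve-specific tooling is needed. As a rough illustration, the following
standalone sketch sets rx-copybreak through the same SIOCETHTOOL ioctl path
that the ethtool binary uses. This is a sketch under assumptions, not part
of the patch: it assumes a Linux host with the UAPI headers installed, and
the interface name "eth0" and the 256-byte threshold are made-up example
values.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
        struct {
                struct ethtool_tunable hdr;
                __u32 val;                /* payload follows the header */
        } req;
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
                return 1;

        memset(&req, 0, sizeof(req));
        req.hdr.cmd = ETHTOOL_STUNABLE;   /* ETHTOOL_GTUNABLE reads it back */
        req.hdr.id = ETHTOOL_RX_COPYBREAK;
        req.hdr.type_id = ETHTOOL_TUNABLE_U32;
        req.hdr.len = sizeof(req.val);
        req.val = 256;                    /* gve rejects values > PAGE_SIZE / 2 */

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (void *)&req;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                perror("ETHTOOL_STUNABLE");
        close(fd);
        return 0;
}

Sufficiently recent ethtool binaries expose the same knob directly, along
the lines of "ethtool --set-tunable <dev> rx-copybreak 256".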
From patchwork Tue Aug 18 19:44:02 2020
X-Patchwork-Submitter: David Awogbemila <awogbemila@google.com>
X-Patchwork-Id: 262346
Date: Tue, 18 Aug 2020 12:44:02 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-4-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 03/18] gve: Register netdev earlier
From: David Awogbemila <awogbemila@google.com>
To: netdev@vger.kernel.org
Cc: Catherine Sullivan, Yangchun Fu, David Awogbemila

From: Catherine Sullivan

Move the registration of the netdev up in probe so that it is
initialized before the driver does any logging; that way the correct
device name is displayed in messages. dev_info/dev_err should then be
used, not netif_info/netif_err.
Reviewed-by: Yangchun Fu Signed-off-by: Catherine Sullivan Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve_adminq.c | 15 +++++------ drivers/net/ethernet/google/gve/gve_main.c | 27 ++++++++++++-------- 2 files changed, 23 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 9dbfcfb8cf63..6a93fe8e6a2b 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -334,8 +334,7 @@ int gve_adminq_describe_device(struct gve_priv *priv) priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries); if (priv->tx_desc_cnt * sizeof(priv->tx->desc[0]) < PAGE_SIZE) { - netif_err(priv, drv, priv->dev, "Tx desc count %d too low\n", - priv->tx_desc_cnt); + dev_err(&priv->pdev->dev, "Tx desc count %d too low\n", priv->tx_desc_cnt); err = -EINVAL; goto free_device_descriptor; } @@ -344,8 +343,7 @@ int gve_adminq_describe_device(struct gve_priv *priv) < PAGE_SIZE || priv->rx_desc_cnt * sizeof(priv->rx->data.data_ring[0]) < PAGE_SIZE) { - netif_err(priv, drv, priv->dev, "Rx desc count %d too low\n", - priv->rx_desc_cnt); + dev_err(&priv->pdev->dev, "Rx desc count %d too low\n", priv->rx_desc_cnt); err = -EINVAL; goto free_device_descriptor; } @@ -353,8 +351,7 @@ int gve_adminq_describe_device(struct gve_priv *priv) be64_to_cpu(descriptor->max_registered_pages); mtu = be16_to_cpu(descriptor->mtu); if (mtu < ETH_MIN_MTU) { - netif_err(priv, drv, priv->dev, "MTU %d below minimum MTU\n", - mtu); + dev_err(&priv->pdev->dev, "MTU %d below minimum MTU\n", mtu); err = -EINVAL; goto free_device_descriptor; } @@ -362,12 +359,12 @@ int gve_adminq_describe_device(struct gve_priv *priv) priv->num_event_counters = be16_to_cpu(descriptor->counters); ether_addr_copy(priv->dev->dev_addr, descriptor->mac); mac = descriptor->mac; - netif_info(priv, drv, priv->dev, "MAC addr: %pM\n", mac); + dev_info(&priv->pdev->dev, "MAC addr: %pM\n", mac); priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl); priv->rx_pages_per_qpl = be16_to_cpu(descriptor->rx_pages_per_qpl); if (priv->rx_pages_per_qpl < priv->rx_desc_cnt) { - netif_err(priv, drv, priv->dev, "rx_pages_per_qpl cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n", - priv->rx_pages_per_qpl); + dev_err(&priv->pdev->dev, "rx_pages_per_qpl cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n", + priv->rx_pages_per_qpl); priv->rx_desc_cnt = priv->rx_pages_per_qpl; } priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues); diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 4f6c1fc9c58d..b0c1cfe44a73 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -930,7 +930,7 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) priv->dev->max_mtu = PAGE_SIZE; err = gve_adminq_set_mtu(priv, priv->dev->mtu); if (err) { - netif_err(priv, drv, priv->dev, "Could not set mtu"); + dev_err(&priv->pdev->dev, "Could not set mtu"); goto err; } } @@ -970,10 +970,10 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) priv->rx_cfg.num_queues); } - netif_info(priv, drv, priv->dev, "TX queues %d, RX queues %d\n", - priv->tx_cfg.num_queues, priv->rx_cfg.num_queues); - netif_info(priv, drv, priv->dev, "Max TX queues %d, Max RX queues %d\n", - priv->tx_cfg.max_queues, priv->rx_cfg.max_queues); + dev_info(&priv->pdev->dev, "TX 
queues %d, RX queues %d\n", + priv->tx_cfg.num_queues, priv->rx_cfg.num_queues); + dev_info(&priv->pdev->dev, "Max TX queues %d, Max RX queues %d\n", + priv->tx_cfg.max_queues, priv->rx_cfg.max_queues); setup_device: err = gve_setup_device_resources(priv); @@ -1133,7 +1133,9 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto abort_with_db_bar; } SET_NETDEV_DEV(dev, &pdev->dev); + pci_set_drvdata(pdev, dev); + dev->ethtool_ops = &gve_ethtool_ops; dev->netdev_ops = &gve_netdev_ops; /* advertise features */ @@ -1160,11 +1162,16 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent) priv->state_flags = 0x0; gve_set_probe_in_progress(priv); + + err = register_netdev(dev); + if (err) + goto abort_with_netdev; + priv->gve_wq = alloc_ordered_workqueue("gve", 0); if (!priv->gve_wq) { dev_err(&pdev->dev, "Could not allocate workqueue"); err = -ENOMEM; - goto abort_with_netdev; + goto abort_while_registered; } INIT_WORK(&priv->service_task, gve_service_task); priv->tx_cfg.max_queues = max_tx_queues; @@ -1174,18 +1181,18 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) goto abort_with_wq; - err = register_netdev(dev); - if (err) - goto abort_with_wq; - dev_info(&pdev->dev, "GVE version %s\n", gve_version_str); gve_clear_probe_in_progress(priv); queue_work(priv->gve_wq, &priv->service_task); + return 0; abort_with_wq: destroy_workqueue(priv->gve_wq); +abort_while_registered: + unregister_netdev(dev); + abort_with_netdev: free_netdev(dev);
From patchwork Tue Aug 18 19:44:04 2020
X-Patchwork-Submitter: David Awogbemila <awogbemila@google.com>
X-Patchwork-Id: 262345
Date: Tue, 18 Aug 2020 12:44:04 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-6-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 05/18] gve: Add Gvnic stats AQ command and ethtool
 show/set-priv-flags.
From: David Awogbemila <awogbemila@google.com>
To: netdev@vger.kernel.org
Cc: Kuo Zhao, Yangchun Fu, David Awogbemila

From: Kuo Zhao

Changes:
- Add a new flag in service_task_flags. Check both this flag and the
  ethtool flag when handling report stats, and update the stats when the
  user turns the ethtool flag on.
- In order to expose the NIC stats to the guest even when the ethtool
  flag is off, share the address and length of the report at setup time.
  When the ethtool flag is turned off, zero out the gve stats instead of
  detaching the report; the report is only detached in
  free_stats_report.
- Add the NIC stats to the ethtool stats. These stats are always exposed
  to the guest whether or not the report-stats flag is turned on.
- Update the gve stats once every 20 seconds.
- Add a field for the stats-report update interval to the AQ command. It
  is exposed to USPS so that USPS can use the same interval to update its
  stats in the report. (A worked sizing example for the shared report
  follows below.)
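To make the report layout concrete, here is a small standalone sketch of
the sizing arithmetic that gve_alloc_stats_report() performs in this patch.
The queue counts are illustrative; the 5/0/2/4 per-queue stat counts and
the 16-byte entry / 8-byte header sizes come from the definitions added
below.

#include <stdio.h>

int main(void)
{
        int tx_queues = 4, rx_queues = 4;        /* example configuration */

        /* GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM = 5 + 0 */
        int tx_stats_num = (5 + 0) * tx_queues;
        /* GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM = 2 + 4 */
        int rx_stats_num = (2 + 4) * rx_queues;

        /* sizeof(struct gve_stats_report) == 8, sizeof(struct stats) == 16 */
        unsigned long len = 8 + (tx_stats_num + rx_stats_num) * 16UL;

        printf("stats_report_len = %lu bytes\n", len);   /* 8 + 44 * 16 = 712 */
        return 0;
}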
Reviewed-by: Yangchun Fu Signed-off-by: Kuo Zhao Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve.h | 77 ++++++- drivers/net/ethernet/google/gve/gve_adminq.c | 21 ++ drivers/net/ethernet/google/gve/gve_adminq.h | 43 ++++ drivers/net/ethernet/google/gve/gve_ethtool.c | 204 ++++++++++++++++-- drivers/net/ethernet/google/gve/gve_main.c | 147 ++++++++++++- .../net/ethernet/google/gve/gve_register.h | 1 + 6 files changed, 458 insertions(+), 35 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 37a3bbced36a..bc54059f9b2e 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -27,6 +27,17 @@ /* 1 for management, 1 for rx, 1 for tx */ #define GVE_MIN_MSIX 3 +/* Numbers of gve tx/rx stats in stats report. */ +#define GVE_TX_STATS_REPORT_NUM 5 +#define GVE_RX_STATS_REPORT_NUM 2 + +/* Numbers of NIC tx/rx stats in stats report. */ +#define NIC_TX_STATS_REPORT_NUM 0 +#define NIC_RX_STATS_REPORT_NUM 4 + +/* Interval to schedule a service task, 20000ms. */ +#define GVE_SERVICE_TIMER_PERIOD 20000 + /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */ struct gve_rx_desc_queue { struct gve_rx_desc *desc_ring; /* the descriptor ring */ @@ -220,6 +231,7 @@ struct gve_priv { u32 adminq_destroy_rx_queue_cnt; u32 adminq_dcfg_device_resources_cnt; u32 adminq_set_driver_parameter_cnt; + u32 adminq_report_stats_cnt; /* Global stats */ u32 interface_up_cnt; /* count of times interface turned up since last reset */ @@ -227,27 +239,39 @@ struct gve_priv { u32 reset_cnt; /* count of reset */ u32 page_alloc_fail; /* count of page alloc fails */ u32 dma_mapping_error; /* count of dma mapping errors */ - struct workqueue_struct *gve_wq; struct work_struct service_task; unsigned long service_task_flags; unsigned long state_flags; + struct gve_stats_report *stats_report; + u64 stats_report_len; + dma_addr_t stats_report_bus; /* dma address for the stats report */ + unsigned long ethtool_flags; + + unsigned long service_timer_period; + struct timer_list service_timer; + /* Gvnic device's dma mask, set during probe. 
*/ u8 dma_mask; }; -enum gve_service_task_flags { - GVE_PRIV_FLAGS_DO_RESET = BIT(1), - GVE_PRIV_FLAGS_RESET_IN_PROGRESS = BIT(2), - GVE_PRIV_FLAGS_PROBE_IN_PROGRESS = BIT(3), +enum gve_service_task_flags_bit { + GVE_PRIV_FLAGS_DO_RESET = 1, + GVE_PRIV_FLAGS_RESET_IN_PROGRESS = 2, + GVE_PRIV_FLAGS_PROBE_IN_PROGRESS = 3, + GVE_PRIV_FLAGS_DO_REPORT_STATS = 4, +}; + +enum gve_state_flags_bit { + GVE_PRIV_FLAGS_ADMIN_QUEUE_OK = 1, + GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK = 2, + GVE_PRIV_FLAGS_DEVICE_RINGS_OK = 3, + GVE_PRIV_FLAGS_NAPI_ENABLED = 4, }; -enum gve_state_flags { - GVE_PRIV_FLAGS_ADMIN_QUEUE_OK = BIT(1), - GVE_PRIV_FLAGS_DEVICE_RESOURCES_OK = BIT(2), - GVE_PRIV_FLAGS_DEVICE_RINGS_OK = BIT(3), - GVE_PRIV_FLAGS_NAPI_ENABLED = BIT(4), +enum gve_ethtool_flags_bit { + GVE_PRIV_FLAGS_REPORT_STATS = 0, }; static inline bool gve_get_do_reset(struct gve_priv *priv) @@ -297,6 +321,22 @@ static inline void gve_clear_probe_in_progress(struct gve_priv *priv) clear_bit(GVE_PRIV_FLAGS_PROBE_IN_PROGRESS, &priv->service_task_flags); } +static inline bool gve_get_do_report_stats(struct gve_priv *priv) +{ + return test_bit(GVE_PRIV_FLAGS_DO_REPORT_STATS, + &priv->service_task_flags); +} + +static inline void gve_set_do_report_stats(struct gve_priv *priv) +{ + set_bit(GVE_PRIV_FLAGS_DO_REPORT_STATS, &priv->service_task_flags); +} + +static inline void gve_clear_do_report_stats(struct gve_priv *priv) +{ + clear_bit(GVE_PRIV_FLAGS_DO_REPORT_STATS, &priv->service_task_flags); +} + static inline bool gve_get_admin_queue_ok(struct gve_priv *priv) { return test_bit(GVE_PRIV_FLAGS_ADMIN_QUEUE_OK, &priv->state_flags); @@ -357,6 +397,21 @@ static inline void gve_clear_napi_enabled(struct gve_priv *priv) clear_bit(GVE_PRIV_FLAGS_NAPI_ENABLED, &priv->state_flags); } +static inline bool gve_get_report_stats(struct gve_priv *priv) +{ + return test_bit(GVE_PRIV_FLAGS_REPORT_STATS, &priv->ethtool_flags); +} + +static inline void gve_set_report_stats(struct gve_priv *priv) +{ + set_bit(GVE_PRIV_FLAGS_REPORT_STATS, &priv->ethtool_flags); +} + +static inline void gve_clear_report_stats(struct gve_priv *priv) +{ + clear_bit(GVE_PRIV_FLAGS_REPORT_STATS, &priv->ethtool_flags); +} + /* Returns the address of the ntfy_blocks irq doorbell */ static inline __be32 __iomem *gve_irq_doorbell(struct gve_priv *priv, @@ -478,6 +533,8 @@ int gve_reset(struct gve_priv *priv, bool attempt_teardown); int gve_adjust_queues(struct gve_priv *priv, struct gve_queue_config new_rx_config, struct gve_queue_config new_tx_config); +/* report stats handling */ +void gve_handle_report_stats(struct gve_priv *priv); /* exported by ethtool.c */ extern const struct ethtool_ops gve_ethtool_ops; /* needed by ethtool */ diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 6a93fe8e6a2b..38e0c3066035 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -35,6 +35,7 @@ int gve_adminq_alloc(struct device *dev, struct gve_priv *priv) priv->adminq_destroy_rx_queue_cnt = 0; priv->adminq_dcfg_device_resources_cnt = 0; priv->adminq_set_driver_parameter_cnt = 0; + priv->adminq_report_stats_cnt = 0; /* Setup Admin queue with the device */ iowrite32be(priv->adminq_bus_addr / PAGE_SIZE, @@ -183,6 +184,9 @@ int gve_adminq_execute_cmd(struct gve_priv *priv, case GVE_ADMINQ_SET_DRIVER_PARAMETER: priv->adminq_set_driver_parameter_cnt++; break; + case GVE_ADMINQ_REPORT_STATS: + priv->adminq_report_stats_cnt++; + break; default: dev_err(&priv->pdev->dev, "unknown 
AQ command opcode %d\n", opcode); } @@ -433,3 +437,20 @@ int gve_adminq_set_mtu(struct gve_priv *priv, u64 mtu) return gve_adminq_execute_cmd(priv, &cmd); } + +int gve_adminq_report_stats(struct gve_priv *priv, u64 stats_report_len, + dma_addr_t stats_report_addr, u64 interval) +{ + union gve_adminq_command cmd; + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = cpu_to_be32(GVE_ADMINQ_REPORT_STATS); + cmd.report_stats = (struct gve_adminq_report_stats) { + .stats_report_len = cpu_to_be64(stats_report_len), + .stats_report_addr = cpu_to_be64(stats_report_addr), + .interval = cpu_to_be64(interval), + }; + + return gve_adminq_execute_cmd(priv, &cmd); +} + diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h index 4dfa06edc0f8..a6c8c29f0d13 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.h +++ b/drivers/net/ethernet/google/gve/gve_adminq.h @@ -21,6 +21,7 @@ enum gve_adminq_opcodes { GVE_ADMINQ_DESTROY_RX_QUEUE = 0x8, GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES = 0x9, GVE_ADMINQ_SET_DRIVER_PARAMETER = 0xB, + GVE_ADMINQ_REPORT_STATS = 0xC, }; /* Admin queue status codes */ @@ -172,6 +173,45 @@ struct gve_adminq_set_driver_parameter { static_assert(sizeof(struct gve_adminq_set_driver_parameter) == 16); +struct gve_adminq_report_stats { + __be64 stats_report_len; + __be64 stats_report_addr; + __be64 interval; +}; + +static_assert(sizeof(struct gve_adminq_report_stats) == 24); + +struct stats { + __be32 stat_name; + __be32 queue_id; + __be64 value; +}; + +static_assert(sizeof(struct stats) == 16); + +struct gve_stats_report { + __be64 written_count; + struct stats stats[0]; +}; + +static_assert(sizeof(struct gve_stats_report) == 8); + +enum gve_stat_names { + // stats from gve + TX_WAKE_CNT = 1, + TX_STOP_CNT = 2, + TX_FRAMES_SENT = 3, + TX_BYTES_SENT = 4, + TX_LAST_COMPLETION_PROCESSED = 5, + RX_NEXT_EXPECTED_SEQUENCE = 6, + RX_BUFFERS_POSTED = 7, + // stats from NIC + RX_QUEUE_DROP_CNT = 65, + RX_NO_BUFFERS_POSTED = 66, + RX_DROPS_PACKET_OVER_MRU = 67, + RX_DROPS_INVALID_CHECKSUM = 68, +}; + union gve_adminq_command { struct { __be32 opcode; @@ -187,6 +227,7 @@ union gve_adminq_command { struct gve_adminq_register_page_list reg_page_list; struct gve_adminq_unregister_page_list unreg_page_list; struct gve_adminq_set_driver_parameter set_driver_param; + struct gve_adminq_report_stats report_stats; }; }; u8 reserved[64]; @@ -214,4 +255,6 @@ int gve_adminq_register_page_list(struct gve_priv *priv, struct gve_queue_page_list *qpl); int gve_adminq_unregister_page_list(struct gve_priv *priv, u32 page_list_id); int gve_adminq_set_mtu(struct gve_priv *priv, u64 mtu); +int gve_adminq_report_stats(struct gve_priv *priv, u64 stats_report_len, + dma_addr_t stats_report_addr, u64 interval); #endif /* _GVE_ADMINQ_H */ diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index ebbf2c4c444e..9d778e04b9f4 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -6,6 +6,7 @@ #include #include "gve.h" +#include "gve_adminq.h" static void gve_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info) @@ -42,6 +43,8 @@ static const char gve_gstrings_main_stats[][ETH_GSTRING_LEN] = { static const char gve_gstrings_rx_stats[][ETH_GSTRING_LEN] = { "rx_posted_desc[%u]", "rx_completed_desc[%u]", "rx_bytes[%u]", "rx_dropped_pkt[%u]", "rx_copybreak_pkt[%u]", "rx_copied_pkt[%u]", + "rx_queue_drop_cnt[%u]", "rx_no_buffers_posted[%u]", + 
"rx_drops_packet_over_mru[%u]", "rx_drops_invalid_checksum[%u]", }; static const char gve_gstrings_tx_stats[][ETH_GSTRING_LEN] = { @@ -56,12 +59,18 @@ static const char gve_gstrings_adminq_stats[][ETH_GSTRING_LEN] = { "adminq_create_tx_queue_cnt", "adminq_create_rx_queue_cnt", "adminq_destroy_tx_queue_cnt", "adminq_destroy_rx_queue_cnt", "adminq_dcfg_device_resources_cnt", "adminq_set_driver_parameter_cnt", + "adminq_report_stats_cnt", +}; + +static const char gve_gstrings_priv_flags[][ETH_GSTRING_LEN] = { + "report-stats", }; #define GVE_MAIN_STATS_LEN ARRAY_SIZE(gve_gstrings_main_stats) #define GVE_ADMINQ_STATS_LEN ARRAY_SIZE(gve_gstrings_adminq_stats) #define NUM_GVE_TX_CNTS ARRAY_SIZE(gve_gstrings_tx_stats) #define NUM_GVE_RX_CNTS ARRAY_SIZE(gve_gstrings_rx_stats) +#define GVE_PRIV_FLAGS_STR_LEN ARRAY_SIZE(gve_gstrings_priv_flags) static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data) { @@ -69,30 +78,42 @@ static void gve_get_strings(struct net_device *netdev, u32 stringset, u8 *data) char *s = (char *)data; int i, j; - if (stringset != ETH_SS_STATS) - return; - - memcpy(s, *gve_gstrings_main_stats, - sizeof(gve_gstrings_main_stats)); - s += sizeof(gve_gstrings_main_stats); - - for (i = 0; i < priv->rx_cfg.num_queues; i++) { - for (j = 0; j < NUM_GVE_RX_CNTS; j++) { - snprintf(s, ETH_GSTRING_LEN, gve_gstrings_rx_stats[j], i); - s += ETH_GSTRING_LEN; + switch (stringset) { + case ETH_SS_STATS: + memcpy(s, *gve_gstrings_main_stats, + sizeof(gve_gstrings_main_stats)); + s += sizeof(gve_gstrings_main_stats); + + for (i = 0; i < priv->rx_cfg.num_queues; i++) { + for (j = 0; j < NUM_GVE_RX_CNTS; j++) { + snprintf(s, ETH_GSTRING_LEN, + gve_gstrings_rx_stats[j], i); + s += ETH_GSTRING_LEN; + } } - } - for (i = 0; i < priv->tx_cfg.num_queues; i++) { - for (j = 0; j < NUM_GVE_TX_CNTS; j++) { - snprintf(s, ETH_GSTRING_LEN, gve_gstrings_tx_stats[j], i); - s += ETH_GSTRING_LEN; + for (i = 0; i < priv->tx_cfg.num_queues; i++) { + for (j = 0; j < NUM_GVE_TX_CNTS; j++) { + snprintf(s, ETH_GSTRING_LEN, + gve_gstrings_tx_stats[j], i); + s += ETH_GSTRING_LEN; + } } - } - memcpy(s, *gve_gstrings_adminq_stats, - sizeof(gve_gstrings_adminq_stats)); - s += sizeof(gve_gstrings_adminq_stats); + memcpy(s, *gve_gstrings_adminq_stats, + sizeof(gve_gstrings_adminq_stats)); + s += sizeof(gve_gstrings_adminq_stats); + break; + + case ETH_SS_PRIV_FLAGS: + memcpy(s, *gve_gstrings_priv_flags, + sizeof(gve_gstrings_priv_flags)); + s += sizeof(gve_gstrings_priv_flags); + break; + + default: + break; + } } static int gve_get_sset_count(struct net_device *netdev, int sset) @@ -104,6 +125,8 @@ static int gve_get_sset_count(struct net_device *netdev, int sset) return GVE_MAIN_STATS_LEN + GVE_ADMINQ_STATS_LEN + (priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS) + (priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS); + case ETH_SS_PRIV_FLAGS: + return GVE_PRIV_FLAGS_STR_LEN; default: return -EOPNOTSUPP; } @@ -113,18 +136,36 @@ static void gve_get_ethtool_stats(struct net_device *netdev, struct ethtool_stats *stats, u64 *data) { + u64 rx_pkts, rx_bytes, rx_skb_alloc_fail, rx_buf_alloc_fail, + rx_desc_err_dropped_pkt, tx_pkts, + tx_bytes; u64 tmp_rx_pkts, tmp_rx_bytes, tmp_rx_skb_alloc_fail, tmp_rx_buf_alloc_fail, tmp_rx_desc_err_dropped_pkt, tmp_tx_pkts, tmp_tx_bytes; - u64 rx_pkts, rx_bytes, rx_skb_alloc_fail, rx_buf_alloc_fail, - rx_desc_err_dropped_pkt, tx_pkts, tx_bytes; + int stats_idx, base_stats_idx, max_stats_idx; struct gve_priv *priv = netdev_priv(netdev); + struct stats *report_stats; + int 
*rx_qid_to_stats_idx; + int *tx_qid_to_stats_idx; + bool skip_nic_stats; unsigned int start; int ring; - int i; + int i, j; ASSERT_RTNL(); + report_stats = priv->stats_report->stats; + rx_qid_to_stats_idx = kmalloc_array(priv->rx_cfg.num_queues, + sizeof(int), GFP_KERNEL); + if (!rx_qid_to_stats_idx) + return; + tx_qid_to_stats_idx = kmalloc_array(priv->tx_cfg.num_queues, + sizeof(int), GFP_KERNEL); + if (!tx_qid_to_stats_idx) { + kfree(rx_qid_to_stats_idx); + return; + } + for (rx_pkts = 0, rx_bytes = 0, rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0, rx_desc_err_dropped_pkt = 0, ring = 0; ring < priv->rx_cfg.num_queues; ring++) { @@ -185,6 +226,25 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = priv->dma_mapping_error; i = GVE_MAIN_STATS_LEN; + /* For rx cross-reporting stats, start from nic rx stats in report */ + base_stats_idx = GVE_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues + + GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues; + max_stats_idx = NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues + + base_stats_idx; + /* Preprocess the stats report for rx, map queue id to start index */ + skip_nic_stats = false; + for (stats_idx = base_stats_idx; stats_idx < max_stats_idx; + stats_idx += NIC_RX_STATS_REPORT_NUM) { + u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name); + u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id); + + if (stat_name == 0) { + /* no stats written by NIC yet */ + skip_nic_stats = true; + break; + } + rx_qid_to_stats_idx[queue_id] = stats_idx; + } /* walk RX rings */ if (priv->rx) { for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) { @@ -209,10 +269,41 @@ gve_get_ethtool_stats(struct net_device *netdev, tmp_rx_desc_err_dropped_pkt; data[i++] = rx->rx_copybreak_pkt; data[i++] = rx->rx_copied_pkt; + /* stats from NIC */ + if (skip_nic_stats) { + /* skip NIC rx stats */ + i += NIC_RX_STATS_REPORT_NUM; + continue; + } + for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) { + u64 value = + be64_to_cpu(report_stats[rx_qid_to_stats_idx[ring] + j].value); + + data[i++] = value; + } } } else { i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS; } + + /* For tx cross-reporting stats, start from nic tx stats in report */ + base_stats_idx = max_stats_idx; + max_stats_idx = NIC_TX_STATS_REPORT_NUM * priv->tx_cfg.num_queues + + max_stats_idx; + /* Preprocess the stats report for tx, map queue id to start index */ + skip_nic_stats = false; + for (stats_idx = base_stats_idx; stats_idx < max_stats_idx; + stats_idx += NIC_TX_STATS_REPORT_NUM) { + u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name); + u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id); + + if (stat_name == 0) { + /* no stats written by NIC yet */ + skip_nic_stats = true; + break; + } + tx_qid_to_stats_idx[queue_id] = stats_idx; + } /* walk TX rings */ if (priv->tx) { for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) { @@ -231,10 +322,26 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = tx->stop_queue; data[i++] = be32_to_cpu(gve_tx_load_event_counter(priv, tx)); + /* stats from NIC */ + if (skip_nic_stats) { + /* skip NIC tx stats */ + i += NIC_TX_STATS_REPORT_NUM; + continue; + } + for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) { + u64 value = + be64_to_cpu(report_stats[tx_qid_to_stats_idx[ring] + j].value); + + data[i++] = value; + } } } else { i += priv->tx_cfg.num_queues * NUM_GVE_TX_CNTS; } + + kfree(rx_qid_to_stats_idx); + kfree(tx_qid_to_stats_idx); + /* AQ Stats */ data[i++] = priv->adminq_prod_cnt; data[i++] = priv->adminq_cmd_fail; @@ 
-249,6 +356,7 @@ gve_get_ethtool_stats(struct net_device *netdev, data[i++] = priv->adminq_destroy_rx_queue_cnt; data[i++] = priv->adminq_dcfg_device_resources_cnt; data[i++] = priv->adminq_set_driver_parameter_cnt; + data[i++] = priv->adminq_report_stats_cnt; } static void gve_get_channels(struct net_device *netdev, @@ -353,6 +461,54 @@ static int gve_set_tunable(struct net_device *netdev, } } +static u32 gve_get_priv_flags(struct net_device *netdev) +{ + struct gve_priv *priv = netdev_priv(netdev); + u32 i, ret_flags = 0; + + for (i = 0; i < GVE_PRIV_FLAGS_STR_LEN; i++) { + if (priv->ethtool_flags & BIT(i)) + ret_flags |= BIT(i); + } + return ret_flags; +} + +static int gve_set_priv_flags(struct net_device *netdev, u32 flags) +{ + struct gve_priv *priv = netdev_priv(netdev); + u64 ori_flags, new_flags; + u32 i; + + ori_flags = READ_ONCE(priv->ethtool_flags); + new_flags = ori_flags; + + for (i = 0; i < GVE_PRIV_FLAGS_STR_LEN; i++) { + if (flags & BIT(i)) + new_flags |= BIT(i); + else + new_flags &= ~(BIT(i)); + priv->ethtool_flags = new_flags; + /* set report-stats */ + if (strcmp(gve_gstrings_priv_flags[i], "report-stats") == 0) { + /* update the stats when user turns report-stats on */ + if (flags & BIT(i)) + gve_handle_report_stats(priv); + /* zero off gve stats when report-stats turned off */ + if (!(flags & BIT(i)) && (ori_flags & BIT(i))) { + int tx_stats_num = GVE_TX_STATS_REPORT_NUM * + priv->tx_cfg.num_queues; + int rx_stats_num = GVE_RX_STATS_REPORT_NUM * + priv->rx_cfg.num_queues; + memset(priv->stats_report->stats, 0, + (tx_stats_num + rx_stats_num) * + sizeof(struct stats)); + } + } + } + + return 0; +} + const struct ethtool_ops gve_ethtool_ops = { .get_drvinfo = gve_get_drvinfo, .get_strings = gve_get_strings, @@ -367,4 +523,6 @@ const struct ethtool_ops gve_ethtool_ops = { .reset = gve_user_reset, .get_tunable = gve_get_tunable, .set_tunable = gve_set_tunable, + .get_priv_flags = gve_get_priv_flags, + .set_priv_flags = gve_set_priv_flags, }; diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 449345d24f29..099d61f00bf0 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -78,6 +78,59 @@ static void gve_free_counter_array(struct gve_priv *priv) priv->counter_array = NULL; } +static void gve_service_task_schedule(struct gve_priv *priv) +{ + if (!gve_get_probe_in_progress(priv) && + !gve_get_reset_in_progress(priv)) { + gve_set_do_report_stats(priv); + queue_work(priv->gve_wq, &priv->service_task); + } +} + +static void gve_service_timer(struct timer_list *t) +{ + struct gve_priv *priv = from_timer(priv, t, service_timer); + + mod_timer(&priv->service_timer, + round_jiffies(jiffies + + msecs_to_jiffies(priv->service_timer_period))); + gve_service_task_schedule(priv); +} + +static int gve_alloc_stats_report(struct gve_priv *priv) +{ + int tx_stats_num, rx_stats_num; + + tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) * + priv->tx_cfg.num_queues; + rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) * + priv->rx_cfg.num_queues; + priv->stats_report_len = sizeof(struct gve_stats_report) + + (tx_stats_num + rx_stats_num) * + sizeof(struct stats); + priv->stats_report = + dma_alloc_coherent(&priv->pdev->dev, priv->stats_report_len, + &priv->stats_report_bus, GFP_KERNEL); + if (!priv->stats_report) + return -ENOMEM; + /* Set up timer for periodic task */ + timer_setup(&priv->service_timer, gve_service_timer, 0); + 
priv->service_timer_period = GVE_SERVICE_TIMER_PERIOD; + /* Start the service task timer */ + mod_timer(&priv->service_timer, + round_jiffies(jiffies + + msecs_to_jiffies(priv->service_timer_period))); + return 0; +} + +static void gve_free_stats_report(struct gve_priv *priv) +{ + del_timer_sync(&priv->service_timer); + dma_free_coherent(&priv->pdev->dev, priv->stats_report_len, + priv->stats_report, priv->stats_report_bus); + priv->stats_report = NULL; +} + static irqreturn_t gve_mgmnt_intr(int irq, void *arg) { struct gve_priv *priv = arg; @@ -270,6 +323,9 @@ static int gve_setup_device_resources(struct gve_priv *priv) err = gve_alloc_notify_blocks(priv); if (err) goto abort_with_counter; + err = gve_alloc_stats_report(priv); + if (err) + goto abort_with_ntfy_blocks; err = gve_adminq_configure_device_resources(priv, priv->counter_array_bus, priv->num_event_counters, @@ -279,10 +335,18 @@ static int gve_setup_device_resources(struct gve_priv *priv) dev_err(&priv->pdev->dev, "could not setup device_resources: err=%d\n", err); err = -ENXIO; - goto abort_with_ntfy_blocks; + goto abort_with_stats_report; } + err = gve_adminq_report_stats(priv, priv->stats_report_len, + priv->stats_report_bus, + GVE_SERVICE_TIMER_PERIOD); + if (err) + dev_err(&priv->pdev->dev, + "Failed to report stats: err=%d\n", err); gve_set_device_resources_ok(priv); return 0; +abort_with_stats_report: + gve_free_stats_report(priv); abort_with_ntfy_blocks: gve_free_notify_blocks(priv); abort_with_counter: @@ -298,6 +362,13 @@ static void gve_teardown_device_resources(struct gve_priv *priv) /* Tell device its resources are being freed */ if (gve_get_device_resources_ok(priv)) { + /* detach the stats report */ + err = gve_adminq_report_stats(priv, 0, 0x0, GVE_SERVICE_TIMER_PERIOD); + if (err) { + dev_err(&priv->pdev->dev, + "Failed to detach stats report: err=%d\n", err); + gve_trigger_reset(priv); + } err = gve_adminq_deconfigure_device_resources(priv); if (err) { dev_err(&priv->pdev->dev, @@ -308,6 +379,7 @@ static void gve_teardown_device_resources(struct gve_priv *priv) } gve_free_counter_array(priv); gve_free_notify_blocks(priv); + gve_free_stats_report(priv); gve_clear_device_resources_ok(priv); } @@ -830,6 +902,7 @@ static void gve_turndown(struct gve_priv *priv) netif_tx_disable(priv->dev); gve_clear_napi_enabled(priv); + gve_clear_report_stats(priv); } static void gve_turnup(struct gve_priv *priv) @@ -880,6 +953,10 @@ static void gve_handle_status(struct gve_priv *priv, u32 status) dev_info(&priv->pdev->dev, "Device requested reset.\n"); gve_set_do_reset(priv); } + if (GVE_DEVICE_STATUS_REPORT_STATS_MASK & status) { + dev_info(&priv->pdev->dev, "Device report stats on.\n"); + gve_set_do_report_stats(priv); + } } static void gve_handle_reset(struct gve_priv *priv) @@ -898,7 +975,68 @@ static void gve_handle_reset(struct gve_priv *priv) } } -/* Handle NIC status register changes and reset requests */ +void gve_handle_report_stats(struct gve_priv *priv) +{ + int idx, stats_idx = 0, tx_bytes; + unsigned int start = 0; + struct stats *stats = priv->stats_report->stats; + + if (!gve_get_report_stats(priv)) + return; + + be64_add_cpu(&priv->stats_report->written_count, 1); + /* tx stats */ + if (priv->tx) { + for (idx = 0; idx < priv->tx_cfg.num_queues; idx++) { + do { + start = u64_stats_fetch_begin(&priv->tx[idx].statss); + tx_bytes = priv->tx[idx].bytes_done; + } while (u64_stats_fetch_retry(&priv->tx[idx].statss, start)); + stats[stats_idx++] = (struct stats) { + .stat_name = cpu_to_be32(TX_WAKE_CNT), + .value = 
cpu_to_be64(priv->tx[idx].wake_queue), + .queue_id = cpu_to_be32(idx), + }; + stats[stats_idx++] = (struct stats) { + .stat_name = cpu_to_be32(TX_STOP_CNT), + .value = cpu_to_be64(priv->tx[idx].stop_queue), + .queue_id = cpu_to_be32(idx), + }; + stats[stats_idx++] = (struct stats) { + .stat_name = cpu_to_be32(TX_FRAMES_SENT), + .value = cpu_to_be64(priv->tx[idx].req), + .queue_id = cpu_to_be32(idx), + }; + stats[stats_idx++] = (struct stats) { + .stat_name = cpu_to_be32(TX_BYTES_SENT), + .value = cpu_to_be64(tx_bytes), + .queue_id = cpu_to_be32(idx), + }; + stats[stats_idx++] = (struct stats) { + .stat_name = cpu_to_be32(TX_LAST_COMPLETION_PROCESSED), + .value = cpu_to_be64(priv->tx[idx].done), + .queue_id = cpu_to_be32(idx), + }; + } + } + /* rx stats */ + if (priv->rx) { + for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) { + stats[stats_idx++] = (struct stats) { + .stat_name = cpu_to_be32(RX_NEXT_EXPECTED_SEQUENCE), + .value = cpu_to_be64(priv->rx[idx].desc.seqno), + .queue_id = cpu_to_be32(idx), + }; + stats[stats_idx++] = (struct stats) { + .stat_name = cpu_to_be32(RX_BUFFERS_POSTED), + .value = cpu_to_be64(priv->rx[idx].fill_cnt), + .queue_id = cpu_to_be32(idx), + }; + } + } +} + +/* Handle NIC status register changes, reset requests and report stats */ static void gve_service_task(struct work_struct *work) { struct gve_priv *priv = container_of(work, struct gve_priv, @@ -908,6 +1046,10 @@ static void gve_service_task(struct work_struct *work) ioread32be(&priv->reg_bar0->device_status)); gve_handle_reset(priv); + if (gve_get_do_report_stats(priv)) { + gve_handle_report_stats(priv); + gve_clear_do_report_stats(priv); + } } static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) @@ -1173,6 +1315,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent) priv->db_bar2 = db_bar; priv->service_task_flags = 0x0; priv->state_flags = 0x0; + priv->ethtool_flags = 0x0; priv->dma_mask = dma_mask; gve_set_probe_in_progress(priv); diff --git a/drivers/net/ethernet/google/gve/gve_register.h b/drivers/net/ethernet/google/gve/gve_register.h index fad8813d1bb1..776c2911842a 100644 --- a/drivers/net/ethernet/google/gve/gve_register.h +++ b/drivers/net/ethernet/google/gve/gve_register.h @@ -24,5 +24,6 @@ struct gve_registers { enum gve_device_status_flags { GVE_DEVICE_STATUS_RESET_MASK = BIT(1), GVE_DEVICE_STATUS_LINK_STATUS_MASK = BIT(2), + GVE_DEVICE_STATUS_REPORT_STATS_MASK = BIT(3), }; #endif /* _GVE_REGISTER_H_ */
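Putting the pieces of this patch together: a consumer on the other side of
the shared DMA region (the hypervisor, or a debugging tool with access to
the buffer) would walk the report roughly as follows. This is an
illustrative sketch, not driver code; the buffer pointer and entry count
are assumed to come from whoever mapped the region, and the struct below
simply mirrors the wire format of struct stats from gve_adminq.h.

#include <stdio.h>
#include <stdint.h>
#include <endian.h>

/* Mirrors struct stats from gve_adminq.h; all fields are big-endian. */
struct gve_wire_stat {
        uint32_t stat_name;
        uint32_t queue_id;
        uint64_t value;
};

void dump_stats_report(const void *buf, int n_entries)
{
        const uint64_t *written_count = buf;        /* __be64 header */
        const struct gve_wire_stat *s = (const void *)(written_count + 1);
        int i;

        printf("report written %llu times\n",
               (unsigned long long)be64toh(*written_count));

        for (i = 0; i < n_entries; i++) {
                uint32_t name = be32toh(s[i].stat_name);

                if (!name)        /* stat_name == 0: slot not written yet */
                        continue;
                printf("stat %u, queue %u, value %llu\n", name,
                       be32toh(s[i].queue_id),
                       (unsigned long long)be64toh(s[i].value));
        }
}

The stat_name == 0 check matches the convention the driver itself relies
on in gve_get_ethtool_stats() above to detect slots the NIC has not
written yet.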
From patchwork Tue Aug 18 19:44:06 2020
X-Patchwork-Submitter: David Awogbemila <awogbemila@google.com>
X-Patchwork-Id: 262344
Date: Tue, 18 Aug 2020 12:44:06 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-8-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 07/18] gve: Use link status register to report link
 status
From: David Awogbemila <awogbemila@google.com>
To: netdev@vger.kernel.org
Cc: Patricio Noyola, Yangchun Fu, David Awogbemila

From: Patricio Noyola

This makes the driver better aware of the connectivity status of the
device. Based on the device's status register, the driver can call
netif_carrier_{on,off}.
Reviewed-by: Yangchun Fu
Signed-off-by: Patricio Noyola
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_main.c | 23 +++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 885943d745aa..685f18697fc1 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -769,7 +769,7 @@ static int gve_open(struct net_device *dev)
         gve_set_device_rings_ok(priv);
 
         gve_turnup(priv);
-        netif_carrier_on(dev);
+        queue_work(priv->gve_wq, &priv->service_task);
         priv->interface_up_cnt++;
         return 0;
 
@@ -1026,16 +1026,33 @@ void gve_handle_report_stats(struct gve_priv *priv)
         }
 }
 
+static void gve_handle_link_status(struct gve_priv *priv, bool link_status)
+{
+        if (!gve_get_napi_enabled(priv))
+                return;
+
+        if (link_status == netif_carrier_ok(priv->dev))
+                return;
+
+        if (link_status) {
+                netif_carrier_on(priv->dev);
+        } else {
+                dev_info(&priv->pdev->dev, "Device link is down.\n");
+                netif_carrier_off(priv->dev);
+        }
+}
+
 /* Handle NIC status register changes, reset requests and report stats */
 static void gve_service_task(struct work_struct *work)
 {
         struct gve_priv *priv = container_of(work, struct gve_priv,
                                              service_task);
+        u32 status = ioread32be(&priv->reg_bar0->device_status);
 
-        gve_handle_status(priv,
-                          ioread32be(&priv->reg_bar0->device_status));
+        gve_handle_status(priv, status);
 
         gve_handle_reset(priv);
+        gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
         if (gve_get_do_report_stats(priv)) {
                 gve_handle_report_stats(priv);
                 gve_clear_do_report_stats(priv);
         }
 }
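One way to see the effect of this patch from inside the guest: the carrier
state that gve_handle_link_status() now toggles is exactly what userspace
reads back through sysfs. A minimal sketch, assuming the standard sysfs
layout; the interface name "eth0" is illustrative:

#include <stdio.h>

int main(void)
{
        char buf[8] = "";
        FILE *f = fopen("/sys/class/net/eth0/carrier", "r");

        if (!f)
                return 1;
        if (fgets(buf, sizeof(buf), f))
                printf("carrier: %s", buf);   /* "1\n" when the link is up */
        fclose(f);
        return 0;
}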
From patchwork Tue Aug 18 19:44:07 2020
X-Patchwork-Submitter: David Awogbemila <awogbemila@google.com>
X-Patchwork-Id: 262339
Date: Tue, 18 Aug 2020 12:44:07 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-9-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 08/18] gve: Enable Link Speed Reporting in the driver.
From: David Awogbemila <awogbemila@google.com>
To: netdev@vger.kernel.org
Cc: David Awogbemila, Yangchun Fu

This change allows the driver to report the device link speed when the
plain "ethtool <device>" command is run. Getting the link speed is done
via a new admin queue command: ReportLinkSpeed.

Reviewed-by: Yangchun Fu
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h         |  3 +++
 drivers/net/ethernet/google/gve/gve_adminq.c  | 27 +++++++++++++++++++
 drivers/net/ethernet/google/gve/gve_adminq.h  |  9 +++++++
 drivers/net/ethernet/google/gve/gve_ethtool.c | 11 ++++++++
 4 files changed, 50 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index bc54059f9b2e..0f5871af36af 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -252,6 +252,9 @@ struct gve_priv { unsigned long service_timer_period; struct timer_list service_timer; + /* Gvnic device link speed from hypervisor. */ + u64 link_speed; + /* Gvnic device's dma mask, set during probe.
*/ u8 dma_mask; }; diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 023e6f804972..cf5eaab1cb83 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -596,3 +596,30 @@ int gve_adminq_report_stats(struct gve_priv *priv, u64 stats_report_len, return gve_adminq_execute_cmd(priv, &cmd); } +int gve_adminq_report_link_speed(struct gve_priv *priv) +{ + union gve_adminq_command gvnic_cmd; + dma_addr_t link_speed_region_bus; + u64 *link_speed_region; + int err; + + link_speed_region = + dma_alloc_coherent(&priv->pdev->dev, sizeof(*link_speed_region), + &link_speed_region_bus, GFP_KERNEL); + + if (!link_speed_region) + return -ENOMEM; + + memset(&gvnic_cmd, 0, sizeof(gvnic_cmd)); + gvnic_cmd.opcode = cpu_to_be32(GVE_ADMINQ_REPORT_LINK_SPEED); + gvnic_cmd.report_link_speed.link_speed_address = + cpu_to_be64(link_speed_region_bus); + + err = gve_adminq_execute_cmd(priv, &gvnic_cmd); + + priv->link_speed = be64_to_cpu(*link_speed_region); + dma_free_coherent(&priv->pdev->dev, sizeof(*link_speed_region), link_speed_region, + link_speed_region_bus); + return err; +} + diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h index 784830f75b7c..281de8326bc5 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.h +++ b/drivers/net/ethernet/google/gve/gve_adminq.h @@ -22,6 +22,7 @@ enum gve_adminq_opcodes { GVE_ADMINQ_DECONFIGURE_DEVICE_RESOURCES = 0x9, GVE_ADMINQ_SET_DRIVER_PARAMETER = 0xB, GVE_ADMINQ_REPORT_STATS = 0xC, + GVE_ADMINQ_REPORT_LINK_SPEED = 0xD }; /* Admin queue status codes */ @@ -181,6 +182,12 @@ struct gve_adminq_report_stats { static_assert(sizeof(struct gve_adminq_report_stats) == 24); +struct gve_adminq_report_link_speed { + __be64 link_speed_address; +}; + +static_assert(sizeof(struct gve_adminq_report_link_speed) == 8); + struct stats { __be32 stat_name; __be32 queue_id; @@ -228,6 +235,7 @@ union gve_adminq_command { struct gve_adminq_unregister_page_list unreg_page_list; struct gve_adminq_set_driver_parameter set_driver_param; struct gve_adminq_report_stats report_stats; + struct gve_adminq_report_link_speed report_link_speed; }; }; u8 reserved[64]; @@ -255,4 +263,5 @@ int gve_adminq_unregister_page_list(struct gve_priv *priv, u32 page_list_id); int gve_adminq_set_mtu(struct gve_priv *priv, u64 mtu); int gve_adminq_report_stats(struct gve_priv *priv, u64 stats_report_len, dma_addr_t stats_report_addr, u64 interval); +int gve_adminq_report_link_speed(struct gve_priv *priv); #endif /* _GVE_ADMINQ_H */ diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c index 9d778e04b9f4..8bc8383558d8 100644 --- a/drivers/net/ethernet/google/gve/gve_ethtool.c +++ b/drivers/net/ethernet/google/gve/gve_ethtool.c @@ -509,6 +509,16 @@ static int gve_set_priv_flags(struct net_device *netdev, u32 flags) return 0; } +static int gve_get_link_ksettings(struct net_device *netdev, + struct ethtool_link_ksettings *cmd) +{ + struct gve_priv *priv = netdev_priv(netdev); + int err = gve_adminq_report_link_speed(priv); + + cmd->base.speed = priv->link_speed; + return err; +} + const struct ethtool_ops gve_ethtool_ops = { .get_drvinfo = gve_get_drvinfo, .get_strings = gve_get_strings, @@ -525,4 +535,5 @@ const struct ethtool_ops gve_ethtool_ops = { .set_tunable = gve_set_tunable, .get_priv_flags = gve_get_priv_flags, .set_priv_flags = gve_set_priv_flags, + .get_link_ksettings = gve_get_link_ksettings 

From patchwork Tue Aug 18 19:44:09 2020
X-Patchwork-Submitter: David Awogbemila
X-Patchwork-Id: 262343
Date: Tue, 18 Aug 2020 12:44:09 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-11-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 10/18] gve: Add support for raw addressing to the rx path
From: David Awogbemila
To: netdev@vger.kernel.org
Cc: Catherine Sullivan, Yangchun Fu, David Awogbemila

From: Catherine Sullivan

Add support for using raw DMA addresses in the rx path. With this
support we can allocate a new buffer instead of making a copy.

Reviewed-by: Yangchun Fu
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve.h        |  23 +-
 drivers/net/ethernet/google/gve/gve_adminq.c |  15 +-
 drivers/net/ethernet/google/gve/gve_desc.h   |  10 +-
 drivers/net/ethernet/google/gve/gve_main.c   |   9 +-
 drivers/net/ethernet/google/gve/gve_rx.c     | 344 ++++++++++++++-----
 5 files changed, 296 insertions(+), 105 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 50565516d1fe..c86cec163bd6 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -50,6 +50,7 @@ struct gve_rx_slot_page_info {
 	struct page *page;
 	void *page_address;
 	u32 page_offset; /* offset to write to in page */
+	bool can_flip; /* page can be flipped and reused */
 };
 
 /* A list of pages registered with the device during setup and used by a queue
@@ -68,6 +69,7 @@ struct gve_rx_data_queue {
 	dma_addr_t data_bus; /* dma mapping of the slots */
 	struct gve_rx_slot_page_info *page_info; /* page info of the buffers */
 	struct gve_queue_page_list *qpl; /* qpl assigned to this queue */
+	bool raw_addressing; /* use raw_addressing? */
 };
 
 struct gve_priv;
@@ -82,11 +84,14 @@ struct gve_rx_ring {
 	u32 cnt; /* free-running total number of completed packets */
 	u32 fill_cnt; /* free-running total number of descs and buffs posted */
 	u32 mask; /* masks the cnt and fill_cnt to the size of the ring */
+	u32 db_threshold; /* threshold for posting new buffs and descs */
 	u64 rx_copybreak_pkt; /* free-running count of copybreak packets */
 	u64 rx_copied_pkt; /* free-running total number of copied packets */
 	u64 rx_skb_alloc_fail; /* free-running count of skb alloc fails */
 	u64 rx_buf_alloc_fail; /* free-running count of buffer alloc fails */
 	u64 rx_desc_err_dropped_pkt; /* free-running count of packets dropped by descriptor error */
+	/* free-running count of packets dropped because of lack of buffer refill */
+	u64 rx_no_refill_dropped_pkt;
 	u32 q_num; /* queue index */
 	u32 ntfy_id; /* notification block index */
 	struct gve_queue_resources *q_resources; /* head and tail pointer idx */
@@ -194,7 +199,7 @@ struct gve_priv {
 	u16 tx_desc_cnt; /* num desc per ring */
 	u16 rx_desc_cnt; /* num desc per ring */
 	u16 tx_pages_per_qpl; /* tx buffer length */
-	u16 rx_pages_per_qpl; /* rx buffer length */
+	u16 rx_data_slot_cnt; /* rx buffer length */
 	u64 max_registered_pages;
 	u64 num_registered_pages; /* num pages registered with NIC */
 	u32 rx_copybreak; /* copy packets smaller than this */
@@ -449,7 +454,10 @@ static inline u32 gve_num_tx_qpls(struct gve_priv *priv)
  */
 static inline u32 gve_num_rx_qpls(struct gve_priv *priv)
 {
-	return priv->rx_cfg.num_queues;
+	if (priv->raw_addressing)
+		return 0;
+	else
+		return priv->rx_cfg.num_queues;
 }
 
 /* Returns a pointer to the next available tx qpl in the list of qpls
@@ -503,18 +511,9 @@ static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv,
 	return DMA_FROM_DEVICE;
 }
 
-/* Returns true if the max mtu allows page recycling */
-static inline bool gve_can_recycle_pages(struct net_device *dev)
-{
-	/* We can't recycle the pages if we can't fit a packet into half a
-	 * page.
-	 */
-	return dev->max_mtu <= PAGE_SIZE / 2;
-}
-
 /* buffers */
 int gve_alloc_page(struct gve_priv *priv, struct device *dev,
 		   struct page **page, dma_addr_t *dma,
-		   enum dma_data_direction);
+		   enum dma_data_direction, gfp_t gfp_flags);
 void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
 		   enum dma_data_direction);
 /* tx handling */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index cf5d172bec83..bb21891d06a2 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -353,8 +353,11 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 {
 	struct gve_rx_ring *rx = &priv->rx[queue_index];
 	union gve_adminq_command cmd;
+	u32 qpl_id;
 	int err;
 
+	qpl_id = priv->raw_addressing ? GVE_RAW_ADDRESSING_QPL_ID :
+					rx->data.qpl->id;
 	memset(&cmd, 0, sizeof(cmd));
 	cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
 	cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) {
@@ -365,7 +368,7 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		.queue_resources_addr = cpu_to_be64(rx->q_resources_bus),
 		.rx_desc_ring_addr = cpu_to_be64(rx->desc.bus),
 		.rx_data_ring_addr = cpu_to_be64(rx->data.data_bus),
-		.queue_page_list_id = cpu_to_be32(rx->data.qpl->id),
+		.queue_page_list_id = cpu_to_be32(qpl_id),
 	};
 
 	err = gve_adminq_issue_cmd(priv, &cmd);
@@ -510,11 +513,11 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 	mac = descriptor->mac;
 	dev_info(&priv->pdev->dev, "MAC addr: %pM\n", mac);
 	priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl);
-	priv->rx_pages_per_qpl = be16_to_cpu(descriptor->rx_pages_per_qpl);
-	if (priv->rx_pages_per_qpl < priv->rx_desc_cnt) {
-		dev_err(&priv->pdev->dev, "rx_pages_per_qpl cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n",
-			priv->rx_pages_per_qpl);
-		priv->rx_desc_cnt = priv->rx_pages_per_qpl;
+	priv->rx_data_slot_cnt = be16_to_cpu(descriptor->rx_pages_per_qpl);
+	if (priv->rx_data_slot_cnt < priv->rx_desc_cnt) {
+		dev_err(&priv->pdev->dev, "rx_data_slot_cnt cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n",
+			priv->rx_data_slot_cnt);
+		priv->rx_desc_cnt = priv->rx_data_slot_cnt;
 	}
 	priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues);
 	dev_opt = (struct gve_device_option *)((void *)descriptor +
diff --git a/drivers/net/ethernet/google/gve/gve_desc.h b/drivers/net/ethernet/google/gve/gve_desc.h
index 54779871d52e..0aad314aefaf 100644
--- a/drivers/net/ethernet/google/gve/gve_desc.h
+++ b/drivers/net/ethernet/google/gve/gve_desc.h
@@ -72,12 +72,14 @@ struct gve_rx_desc {
 } __packed;
 static_assert(sizeof(struct gve_rx_desc) == 64);
 
-/* As with the Tx ring format, the qpl_offset entries below are offsets into an
- * ordered list of registered pages.
+/* If the device supports raw dma addressing then the addr in data slot is
+ * the dma address of the buffer.
+ * If the device only supports registered segments then the addr is a byte
+ * offset into the registered segment (an ordered list of pages) where the
+ * buffer is.
  */
 struct gve_rx_data_slot {
-	/* byte offset into the rx registered segment of this slot */
-	__be64 qpl_offset;
+	__be64 addr;
 };
 
 /* GVE Receive Packet Descriptor Seq No */
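
The rewritten comment in gve_desc.h captures the central format change: the same
64-bit slot value now carries either a raw bus address or a byte offset into the
registered segment, depending on the negotiated mode. As an illustration of the
two interpretations (gve itself fills the slot inside gve_setup_rx_buffer(); the
demo_ helpers below are hypothetical, not driver API):

/* Illustrative only: the two interpretations of gve_rx_data_slot.addr. */
static void demo_fill_slot_raw(struct gve_rx_data_slot *slot, dma_addr_t dma)
{
	/* raw addressing: the device DMAs straight to this bus address */
	slot->addr = cpu_to_be64(dma);
}

static void demo_fill_slot_qpl(struct gve_rx_data_slot *slot, u32 page_idx)
{
	/* QPL mode: byte offset into the ordered list of registered pages */
	slot->addr = cpu_to_be64((u64)page_idx * PAGE_SIZE);
}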
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 3e9b69747370..de22c60d1fea 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -578,10 +578,8 @@ static void gve_free_rings(struct gve_priv *priv)
 
 int gve_alloc_page(struct gve_priv *priv, struct device *dev,
 		   struct page **page, dma_addr_t *dma,
-		   enum dma_data_direction dir)
+		   enum dma_data_direction dir, gfp_t gfp_flags)
 {
-	gfp_t gfp_flags = GFP_KERNEL;
-
 	if (priv->dma_mask == 24)
 		gfp_flags |= GFP_DMA;
 	else if (priv->dma_mask == 32)
@@ -596,6 +594,7 @@ int gve_alloc_page(struct gve_priv *priv, struct device *dev,
 	if (dma_mapping_error(dev, *dma)) {
 		priv->dma_mapping_error++;
 		put_page(*page);
+		*page = NULL;
 		return -ENOMEM;
 	}
 	return 0;
@@ -631,7 +630,7 @@ static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id,
 	for (i = 0; i < pages; i++) {
 		err = gve_alloc_page(priv, &priv->pdev->dev, &qpl->pages[i],
 				     &qpl->page_buses[i],
-				     gve_qpl_dma_dir(priv, id));
+				     gve_qpl_dma_dir(priv, id), GFP_KERNEL);
 		/* caller handles clean up */
 		if (err)
 			return -ENOMEM;
@@ -694,7 +693,7 @@ static int gve_alloc_qpls(struct gve_priv *priv)
 	}
 	for (; i < num_qpls; i++) {
 		err = gve_alloc_queue_page_list(priv, i,
-						priv->rx_pages_per_qpl);
+						priv->rx_data_slot_cnt);
 		if (err)
 			goto free_qpls;
 	}
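
Threading gfp_flags through gve_alloc_page() matters because the helper is now
called from two different contexts: ring setup, where sleeping with GFP_KERNEL is
fine, and buffer refill inside NAPI poll, which runs in softirq context and must
use GFP_ATOMIC (as the refill path later in this patch does). A sketch of the
resulting convention, with a hypothetical wrapper that is not part of the driver:

/* Hypothetical wrapper, not driver code: pick the GFP flags by context. */
static int demo_alloc_rx_page(struct gve_priv *priv, struct device *dev,
			      struct page **page, dma_addr_t *dma,
			      bool in_napi)
{
	/* NAPI poll must not sleep; setup paths may. */
	gfp_t gfp = in_napi ? GFP_ATOMIC : GFP_KERNEL;

	return gve_alloc_page(priv, dev, page, dma, DMA_FROM_DEVICE, gfp);
}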
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 008fa897a3e6..ca12f267d08a 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -16,12 +16,22 @@ static void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx)
 	block->rx = NULL;
 }
 
+static void gve_rx_free_buffer(struct device *dev,
+			       struct gve_rx_slot_page_info *page_info,
+			       struct gve_rx_data_slot *data_slot)
+{
+	dma_addr_t dma = (dma_addr_t)(be64_to_cpu(data_slot->addr) -
+				      page_info->page_offset);
+
+	gve_free_page(dev, page_info->page, dma, DMA_FROM_DEVICE);
+}
+
 static void gve_rx_free_ring(struct gve_priv *priv, int idx)
 {
 	struct gve_rx_ring *rx = &priv->rx[idx];
 	struct device *dev = &priv->pdev->dev;
 	size_t bytes;
-	u32 slots;
+	u32 slots = rx->mask + 1;
 
 	gve_rx_remove_from_block(priv, idx);
 
@@ -33,11 +43,18 @@ static void gve_rx_free_ring(struct gve_priv *priv, int idx)
 			  rx->q_resources, rx->q_resources_bus);
 	rx->q_resources = NULL;
 
-	gve_unassign_qpl(priv, rx->data.qpl->id);
-	rx->data.qpl = NULL;
+	if (rx->data.raw_addressing) {
+		int i;
+
+		for (i = 0; i < slots; i++)
+			gve_rx_free_buffer(dev, &rx->data.page_info[i],
+					   &rx->data.data_ring[i]);
+	} else {
+		gve_unassign_qpl(priv, rx->data.qpl->id);
+		rx->data.qpl = NULL;
+	}
 	kvfree(rx->data.page_info);
 
-	slots = rx->mask + 1;
 	bytes = sizeof(*rx->data.data_ring) * slots;
 	dma_free_coherent(dev, bytes, rx->data.data_ring,
 			  rx->data.data_bus);
@@ -52,13 +69,14 @@ static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info,
 	page_info->page = page;
 	page_info->page_offset = 0;
 	page_info->page_address = page_address(page);
-	slot->qpl_offset = cpu_to_be64(addr);
+	slot->addr = cpu_to_be64(addr);
 }
 
 static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
 {
 	struct gve_priv *priv = rx->gve;
 	u32 slots;
+	int err;
 	int i;
 
 	/* Allocate one page per Rx queue slot. Each page is split into two
@@ -71,12 +89,31 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
 	if (!rx->data.page_info)
 		return -ENOMEM;
 
-	rx->data.qpl = gve_assign_rx_qpl(priv);
-
+	if (!rx->data.raw_addressing)
+		rx->data.qpl = gve_assign_rx_qpl(priv);
 	for (i = 0; i < slots; i++) {
-		struct page *page = rx->data.qpl->pages[i];
-		dma_addr_t addr = i * PAGE_SIZE;
+		struct page *page;
+		dma_addr_t addr;
+
+		if (rx->data.raw_addressing) {
+			err = gve_alloc_page(priv, &priv->pdev->dev, &page,
+					     &addr, DMA_FROM_DEVICE,
+					     GFP_KERNEL);
+			if (err) {
+				int j;
+
+				u64_stats_update_begin(&rx->statss);
+				rx->rx_buf_alloc_fail++;
+				u64_stats_update_end(&rx->statss);
+				/* unwind the buffers set up so far */
+				for (j = 0; j < i; j++)
+					gve_rx_free_buffer(&priv->pdev->dev,
+							   &rx->data.page_info[j],
+							   &rx->data.data_ring[j]);
+				return err;
+			}
+		} else {
+			page = rx->data.qpl->pages[i];
+			addr = i * PAGE_SIZE;
+		}
 		gve_setup_rx_buffer(&rx->data.page_info[i],
 				    &rx->data.data_ring[i], addr, page);
 	}
@@ -110,8 +147,9 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx)
 	rx->gve = priv;
 	rx->q_num = idx;
 
-	slots = priv->rx_pages_per_qpl;
+	slots = priv->rx_data_slot_cnt;
 	rx->mask = slots - 1;
+	rx->data.raw_addressing = priv->raw_addressing;
 
 	/* alloc rx data ring */
 	bytes = sizeof(*rx->data.data_ring) * slots;
@@ -156,8 +194,8 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx)
 		err = -ENOMEM;
 		goto abort_with_q_resources;
 	}
-	rx->mask = slots - 1;
 	rx->cnt = 0;
+	rx->db_threshold = priv->rx_desc_cnt / 2;
 	rx->desc.seqno = 1;
 	gve_rx_add_to_block(priv, idx);
 
@@ -225,8 +263,7 @@ static enum pkt_hash_types gve_rss_type(__be16 pkt_flags)
 	return PKT_HASH_TYPE_L2;
 }
 
-static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx,
-				   struct net_device *dev,
+static struct sk_buff *gve_rx_copy(struct net_device *dev,
 				   struct napi_struct *napi,
 				   struct gve_rx_slot_page_info *page_info,
 				   u16 len)
@@ -244,15 +281,10 @@ static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx,
 
 	skb->protocol = eth_type_trans(skb, dev);
 
-	u64_stats_update_begin(&rx->statss);
-	rx->rx_copied_pkt++;
-	u64_stats_update_end(&rx->statss);
-
 	return skb;
 }
 
-static struct sk_buff *gve_rx_add_frags(struct net_device *dev,
-					struct napi_struct *napi,
+static struct sk_buff *gve_rx_add_frags(struct napi_struct *napi,
 					struct gve_rx_slot_page_info *page_info,
 					u16 len)
 {
@@ -268,14 +300,118 @@ static struct sk_buff *gve_rx_add_frags(struct net_device *dev,
 	return skb;
 }
 
-static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info,
-			     struct gve_rx_data_slot *data_ring)
+static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev,
+			       struct gve_rx_slot_page_info *page_info,
+			       struct gve_rx_data_slot *data_slot,
+			       struct gve_rx_ring *rx)
+{
+	struct page *page;
+	dma_addr_t dma;
+	int err;
+
+	err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE, GFP_ATOMIC);
+	if (err) {
+		u64_stats_update_begin(&rx->statss);
+		rx->rx_buf_alloc_fail++;
+		u64_stats_update_end(&rx->statss);
+		return err;
+	}
+
+	gve_setup_rx_buffer(page_info, data_slot, dma, page);
+	return 0;
+}
+
+static void gve_rx_flip_buffer(struct gve_rx_slot_page_info *page_info,
+			       struct gve_rx_data_slot *data_slot)
 {
-	u64 addr = be64_to_cpu(data_ring->qpl_offset);
+	u64 addr = be64_to_cpu(data_slot->addr);
 
+	/* "flip" to other packet buffer on this page */
 	page_info->page_offset ^= PAGE_SIZE / 2;
 	addr ^= PAGE_SIZE / 2;
-	data_ring->qpl_offset = cpu_to_be64(addr);
+	data_slot->addr = cpu_to_be64(addr);
+}
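
The XOR in gve_rx_flip_buffer() works because each page is split into two
half-page buffers and page_offset is only ever 0 or PAGE_SIZE / 2, so XOR-ing
with PAGE_SIZE / 2 toggles between the two halves. A worked example, with a
4096-byte page size assumed:

/* Worked example of the flip arithmetic, assuming PAGE_SIZE == 4096 and
 * page_offset always 0 or 2048, as in this driver.
 */
static unsigned int demo_flip_offset(unsigned int page_offset)
{
	/* 0 -> 2048 and 2048 -> 0; applying the same XOR to the slot's
	 * addr keeps the device-visible pointer on the chosen half.
	 */
	return page_offset ^ (4096 / 2);
}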
+
+static bool gve_rx_can_flip_buffers(struct net_device *netdev)
+{
+#if PAGE_SIZE == 4096
+	/* We can't flip a buffer if we can't fit a packet
+	 * into half a page.
+	 */
+	if (netdev->max_mtu + GVE_RX_PAD + ETH_HLEN > PAGE_SIZE / 2)
+		return false;
+	return true;
+#else
+	/* PAGE_SIZE != 4096 - don't try to reuse */
+	return false;
+#endif
+}
+
+static int gve_rx_can_recycle_buffer(struct page *page)
+{
+	int pagecount = page_count(page);
+
+	/* This page is not being used by any SKBs - reuse */
+	if (pagecount == 1) {
+		return 1;
+	/* This page is still being used by an SKB - we can't reuse */
+	} else if (pagecount >= 2) {
+		return 0;
+	}
+	WARN(pagecount < 1, "Pagecount should never be < 1");
+	return -1;
+}
+
+static struct sk_buff *
+gve_rx_raw_addressing(struct device *dev, struct net_device *netdev,
+		      struct gve_rx_slot_page_info *page_info, u16 len,
+		      struct napi_struct *napi,
+		      struct gve_rx_data_slot *data_slot, bool can_flip)
+{
+	struct sk_buff *skb = gve_rx_add_frags(napi, page_info, len);
+
+	if (!skb)
+		return NULL;
+
+	/* Optimistically stop the kernel from freeing the page by increasing
+	 * the page bias. We will check the refcount in refill to determine if
+	 * we need to alloc a new page.
+	 */
+	get_page(page_info->page);
+	page_info->can_flip = can_flip;
+
+	return skb;
+}
+
+static struct sk_buff *
+gve_rx_qpl(struct device *dev, struct net_device *netdev,
+	   struct gve_rx_ring *rx, struct gve_rx_slot_page_info *page_info,
+	   u16 len, struct napi_struct *napi,
+	   struct gve_rx_data_slot *data_slot, bool recycle)
+{
+	struct sk_buff *skb;
+
+	/* if raw_addressing mode is not enabled gvnic can only receive into
+	 * registered segments. If the buffer can't be recycled, our only
+	 * choice is to copy the data out of it so that we can return it to the
+	 * device.
+	 */
+	if (recycle) {
+		skb = gve_rx_add_frags(napi, page_info, len);
+		/* No point in recycling if we didn't get the skb */
+		if (skb) {
+			/* Make sure the networking stack can't free the page */
+			get_page(page_info->page);
+			gve_rx_flip_buffer(page_info, data_slot);
+		}
+	} else {
+		skb = gve_rx_copy(netdev, napi, page_info, len);
+		if (skb) {
+			u64_stats_update_begin(&rx->statss);
+			rx->rx_copied_pkt++;
+			u64_stats_update_end(&rx->statss);
+		}
+	}
+	return skb;
+}
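
Together, gve_rx_can_recycle_buffer() and the get_page() calls above implement a
small ownership protocol: the driver keeps one base reference on every RX page
and takes an extra reference each time a half-page is handed to the stack as an
skb fragment. A condensed sketch of the invariant (illustrative, not driver API):

#include <linux/mm.h>

/* Sketch: who owns the page right now? */
static bool demo_page_is_exclusively_ours(struct page *page)
{
	/* page_count == 1: every skb fragment referencing the page has
	 * been freed, so both halves are the driver's to reuse.
	 * page_count >= 2: the stack still holds at least one half.
	 */
	return page_count(page) == 1;
}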
 
 static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
@@ -284,9 +420,10 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
 	struct gve_rx_slot_page_info *page_info;
 	struct gve_priv *priv = rx->gve;
 	struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi;
-	struct net_device *dev = priv->dev;
-	struct sk_buff *skb;
-	int pagecount;
+	struct net_device *netdev = priv->dev;
+	struct gve_rx_data_slot *data_slot;
+	struct sk_buff *skb = NULL;
+	dma_addr_t page_bus;
 	u16 len;
 
 	/* drop this packet */
@@ -294,71 +431,56 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
 		u64_stats_update_begin(&rx->statss);
 		rx->rx_desc_err_dropped_pkt++;
 		u64_stats_update_end(&rx->statss);
-		return true;
+		return false;
 	}
 
 	len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD;
 	page_info = &rx->data.page_info[idx];
-	dma_sync_single_for_cpu(&priv->pdev->dev, rx->data.qpl->page_buses[idx],
-				PAGE_SIZE, DMA_FROM_DEVICE);
-	/* gvnic can only receive into registered segments. If the buffer
-	 * can't be recycled, our only choice is to copy the data out of
-	 * it so that we can return it to the device.
-	 */
+	data_slot = &rx->data.data_ring[idx];
+	page_bus = (rx->data.raw_addressing) ?
+			be64_to_cpu(data_slot->addr) - page_info->page_offset :
+			rx->data.qpl->page_buses[idx];
+	dma_sync_single_for_cpu(&priv->pdev->dev, page_bus,
+				PAGE_SIZE, DMA_FROM_DEVICE);
 
-	if (PAGE_SIZE == 4096) {
-		if (len <= priv->rx_copybreak) {
-			/* Just copy small packets */
-			skb = gve_rx_copy(rx, dev, napi, page_info, len);
+	if (len <= priv->rx_copybreak) {
+		/* Just copy small packets */
+		skb = gve_rx_copy(netdev, napi, page_info, len);
+		if (skb) {
 			u64_stats_update_begin(&rx->statss);
+			rx->rx_copied_pkt++;
 			rx->rx_copybreak_pkt++;
 			u64_stats_update_end(&rx->statss);
-			goto have_skb;
 		}
-		if (unlikely(!gve_can_recycle_pages(dev))) {
-			skb = gve_rx_copy(rx, dev, napi, page_info, len);
-			goto have_skb;
-		}
-		pagecount = page_count(page_info->page);
-		if (pagecount == 1) {
-			/* No part of this page is used by any SKBs; we attach
-			 * the page fragment to a new SKB and pass it up the
-			 * stack.
-			 */
-			skb = gve_rx_add_frags(dev, napi, page_info, len);
-			if (!skb) {
-				u64_stats_update_begin(&rx->statss);
-				rx->rx_skb_alloc_fail++;
-				u64_stats_update_end(&rx->statss);
-				return true;
+	} else {
+		bool can_flip = gve_rx_can_flip_buffers(netdev);
+		int recycle = 0;
+
+		if (can_flip) {
+			recycle = gve_rx_can_recycle_buffer(page_info->page);
+			if (recycle < 0) {
+				gve_schedule_reset(priv);
+				return false;
 			}
-			/* Make sure the kernel stack can't release the page */
-			get_page(page_info->page);
-			/* "flip" to other packet buffer on this page */
-			gve_rx_flip_buff(page_info, &rx->data.data_ring[idx]);
-		} else if (pagecount >= 2) {
-			/* We have previously passed the other half of this
-			 * page up the stack, but it has not yet been freed.
-			 */
-			skb = gve_rx_copy(rx, dev, napi, page_info, len);
+		}
+		if (rx->data.raw_addressing) {
+			skb = gve_rx_raw_addressing(&priv->pdev->dev, netdev,
						    page_info, len, napi,
						    data_slot,
						    can_flip && recycle);
 		} else {
-			WARN(pagecount < 1, "Pagecount should never be < 1");
-			return false;
+			skb = gve_rx_qpl(&priv->pdev->dev, netdev, rx,
+					 page_info, len, napi, data_slot,
+					 can_flip && recycle);
 		}
-	} else {
-		skb = gve_rx_copy(rx, dev, napi, page_info, len);
 	}
 
-have_skb:
-	/* We didn't manage to allocate an skb but we haven't had any
-	 * reset worthy failures.
-	 */
 	if (!skb) {
 		u64_stats_update_begin(&rx->statss);
 		rx->rx_skb_alloc_fail++;
 		u64_stats_update_end(&rx->statss);
-		return true;
+		return false;
 	}
 
 	if (likely(feat & NETIF_F_RXCSUM)) {
@@ -380,6 +502,7 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
 		napi_gro_frags(napi);
 	else
 		napi_gro_receive(napi, skb);
+
 	return true;
 }
 
@@ -399,19 +522,72 @@ static bool gve_rx_work_pending(struct gve_rx_ring *rx)
 	return (GVE_SEQNO(flags_seq) == rx->desc.seqno);
 }
 
+static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx)
+{
+	u32 fill_cnt = rx->fill_cnt;
+
+	while ((fill_cnt & rx->mask) != (rx->cnt & rx->mask)) {
+		u32 idx = fill_cnt & rx->mask;
+		struct gve_rx_slot_page_info *page_info =
+						&rx->data.page_info[idx];
+
+		if (page_info->can_flip) {
+			/* The other half of the page is free because it was
+			 * free when we processed the descriptor. Flip to it.
+			 */
+			struct gve_rx_data_slot *data_slot =
+						&rx->data.data_ring[idx];
+
+			gve_rx_flip_buffer(page_info, data_slot);
+			page_info->can_flip = false;
+		} else {
+			/* It is possible that the networking stack has already
+			 * finished processing all outstanding packets in the
+			 * buffer and it can be reused.
+			 * Flipping is unnecessary here - if the networking stack
+			 * still owns half the page it is impossible to tell
+			 * which half. Either the whole page is free or it
+			 * needs to be replaced.
+			 */
+			int recycle = gve_rx_can_recycle_buffer(page_info->page);
+
+			if (recycle < 0) {
+				gve_schedule_reset(priv);
+				return false;
+			}
+			if (!recycle) {
+				/* We can't reuse the buffer - alloc a new one */
+				struct gve_rx_data_slot *data_slot =
+						&rx->data.data_ring[idx];
+				struct device *dev = &priv->pdev->dev;
+
+				gve_rx_free_buffer(dev, page_info, data_slot);
+				page_info->page = NULL;
+				if (gve_rx_alloc_buffer(priv, dev, page_info,
+							data_slot, rx)) {
+					break;
+				}
+			}
+		}
+		fill_cnt++;
+	}
+	rx->fill_cnt = fill_cnt;
+	return true;
+}
+
 bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 		       netdev_features_t feat)
 {
 	struct gve_priv *priv = rx->gve;
+	u32 work_done = 0, packets = 0;
 	struct gve_rx_desc *desc;
 	u32 cnt = rx->cnt;
 	u32 idx = cnt & rx->mask;
-	u32 work_done = 0;
 	u64 bytes = 0;
 
 	desc = rx->desc.desc_ring + idx;
 	while ((GVE_SEQNO(desc->flags_seq) == rx->desc.seqno) &&
 	       work_done < budget) {
+		bool dropped;
 		netif_info(priv, rx_status, priv->dev,
 			   "[%d] idx=%d desc=%p desc->flags_seq=0x%x\n",
 			   rx->q_num, idx, desc, desc->flags_seq);
@@ -419,9 +595,11 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 			   "[%d] seqno=%d rx->desc.seqno=%d\n",
 			   rx->q_num, GVE_SEQNO(desc->flags_seq),
 			   rx->desc.seqno);
-		bytes += be16_to_cpu(desc->len) - GVE_RX_PAD;
-		if (!gve_rx(rx, desc, feat, idx))
-			gve_schedule_reset(priv);
+		dropped = !gve_rx(rx, desc, feat, idx);
+		if (!dropped) {
+			bytes += be16_to_cpu(desc->len) - GVE_RX_PAD;
+			packets++;
+		}
 		cnt++;
 		idx = cnt & rx->mask;
 		desc = rx->desc.desc_ring + idx;
@@ -433,11 +611,21 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 		return false;
 
 	u64_stats_update_begin(&rx->statss);
-	rx->rpackets += work_done;
+	rx->rpackets += packets;
 	rx->rbytes += bytes;
 	u64_stats_update_end(&rx->statss);
 	rx->cnt = cnt;
-	rx->fill_cnt += work_done;
+	/* restock ring slots */
+	if (!rx->data.raw_addressing) {
+		/* In QPL mode buffs are refilled as the desc are processed */
+		rx->fill_cnt += work_done;
+	} else if (rx->fill_cnt - cnt <= rx->db_threshold) {
+		/* In raw addressing mode buffs are only refilled if the avail
+		 * falls below a threshold.
+		 */
+		if (!gve_rx_refill_buffers(priv, rx))
+			return false;
+	}
 
 	gve_rx_write_doorbell(priv, rx);
 	return gve_rx_work_pending(rx);
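
The refill policy is the core behavioral change: in QPL mode every processed
descriptor is immediately repostable, but in raw mode reposting can require fresh
page allocations, so the driver batches refills and only tops the ring up once
fewer than db_threshold buffers remain posted (half the descriptor count, per
gve_rx_alloc_ring() earlier in this patch). A worked example of the trigger, with
ring sizes assumed purely for illustration:

/* Worked example (numbers assumed): rx_desc_cnt = 1024, db_threshold = 512.
 * fill_cnt and cnt are free-running, so fill_cnt - cnt is the number of
 * posted-but-unprocessed buffers.
 */
static bool demo_needs_refill(u32 fill_cnt, u32 cnt, u32 db_threshold)
{
	/* e.g. fill_cnt = 1536, cnt = 1100 -> 436 posted <= 512 -> refill */
	return fill_cnt - cnt <= db_threshold;
}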

From patchwork Tue Aug 18 19:44:11 2020
X-Patchwork-Submitter: David Awogbemila
X-Patchwork-Id: 262342
Date: Tue, 18 Aug 2020 12:44:11 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-13-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 12/18] gve: Add netif_set_xps_queue call
From: David Awogbemila
To: netdev@vger.kernel.org
Cc: Catherine Sullivan, Yangchun Fu, David Awogbemila

From: Catherine Sullivan

Configure XPS when adding tx queues to the notification blocks.

Reviewed-by: Yangchun Fu
Signed-off-by: Catherine Sullivan
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_tx.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 01d26bdca9b1..5d75dec1c520 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -176,12 +176,16 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx)
 
 static void gve_tx_add_to_block(struct gve_priv *priv, int queue_idx)
 {
+	unsigned int active_cpus = min_t(int, priv->num_ntfy_blks / 2,
+					 num_online_cpus());
 	int ntfy_idx = gve_tx_idx_to_ntfy(priv, queue_idx);
 	struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 	struct gve_tx_ring *tx = &priv->tx[queue_idx];
 
 	block->tx = tx;
 	tx->ntfy_id = ntfy_idx;
+	netif_set_xps_queue(priv->dev, get_cpu_mask(ntfy_idx % active_cpus),
+			    queue_idx);
 }
 
 static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
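
For a concrete picture of what this mapping does, assume 16 notification blocks
(split evenly between tx and rx), 8 online CPUs, and tx queue i using notification
block i; none of these numbers are mandated by the driver. Then active_cpus =
min(16 / 2, 8) = 8 and tx queue i is steered to CPU i % 8:

/* Illustrative userspace demo of the queue-to-CPU spread above. */
#include <stdio.h>

int main(void)
{
	int num_ntfy_blks = 16;		/* assumed device configuration */
	int num_online_cpus = 8;	/* assumed host configuration */
	int half = num_ntfy_blks / 2;
	int active_cpus = half < num_online_cpus ? half : num_online_cpus;

	for (int q = 0; q < half; q++)
		printf("tx queue %d -> CPU %d\n", q, q % active_cpus);
	return 0;
}

The masks the kernel ends up with can be inspected at runtime under
/sys/class/net/<dev>/queues/tx-<n>/xps_cpus.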
From patchwork Tue Aug 18 19:44:14 2020
X-Patchwork-Submitter: David Awogbemila
X-Patchwork-Id: 262340
Date: Tue, 18 Aug 2020 12:44:14 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-16-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 15/18] gve: Prefetch packet pages and packet descriptors.
From: David Awogbemila
To: netdev@vger.kernel.org
Cc: Yangchun Fu, Nathan Lewis, David Awogbemila

From: Yangchun Fu

Prefetching appears to help performance.

Signed-off-by: Yangchun Fu
Signed-off-by: Nathan Lewis
Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_rx.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index c65615b9e602..24bd556f488e 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -447,8 +447,18 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc,
 	struct gve_rx_data_slot *data_slot;
 	struct sk_buff *skb = NULL;
 	dma_addr_t page_bus;
+	void *va;
 	u16 len;
 
+	/* Prefetch two packet pages ahead, we will need them soon. */
+	page_info = &rx->data.page_info[(idx + 2) & rx->mask];
+	va = page_info->page_address + GVE_RX_PAD +
+	     page_info->page_offset;
+
+	prefetch(page_info->page);	// Kernel page struct.
+	prefetch(va);			// Packet header.
+	prefetch(va + 64);		// Next cacheline too.
+
 	/* drop this packet */
 	if (unlikely(rx_desc->flags_seq & GVE_RXF_ERR)) {
@@ -618,6 +628,10 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget,
 			   "[%d] seqno=%d rx->desc.seqno=%d\n",
 			   rx->q_num, GVE_SEQNO(desc->flags_seq),
 			   rx->desc.seqno);
+
+		// prefetch two descriptors ahead
+		prefetch(rx->desc.desc_ring + ((cnt + 2) & rx->mask));
+
 		dropped = !gve_rx(rx, desc, feat, idx);
 		if (!dropped) {
 			bytes += be16_to_cpu(desc->len) - GVE_RX_PAD;
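
The prefetch distance of two entries is a latency-hiding heuristic: by the time
the CPU finishes the current descriptor and packet, the data for entry i + 2
should already be resident in cache. A generic sketch of the pattern on a
power-of-two ring (hypothetical entry type and handler; the ideal distance
depends on per-packet processing cost versus memory latency):

#include <linux/prefetch.h>
#include <linux/types.h>

struct demo_entry { char data[64]; };	/* hypothetical ring entry */

static void demo_process_ring(struct demo_entry *ring, u32 mask, u32 cnt,
			      int budget, void (*handle)(struct demo_entry *))
{
	while (budget--) {
		prefetch(&ring[(cnt + 2) & mask]);	/* two entries ahead */
		handle(&ring[cnt & mask]);		/* current entry */
		cnt++;
	}
}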

From patchwork Tue Aug 18 19:44:15 2020
X-Patchwork-Submitter: David Awogbemila
X-Patchwork-Id: 262341
Date: Tue, 18 Aug 2020 12:44:15 -0700
In-Reply-To: <20200818194417.2003932-1-awogbemila@google.com>
Message-Id: <20200818194417.2003932-17-awogbemila@google.com>
References: <20200818194417.2003932-1-awogbemila@google.com>
Subject: [PATCH net-next 16/18] gve: Also WARN for skb index equals num_queues.
From: David Awogbemila
To: netdev@vger.kernel.org
Cc: David Awogbemila

The WARN condition currently checks if the skb's queue_mapping is
(strictly) greater than tx_cfg.num_queues. queue_mapping ==
tx_cfg.num_queues should also be invalid and trigger the warning.
Use WARN_ON_ONCE instead; otherwise the host can get stuck in a
syslog flood.

Signed-off-by: David Awogbemila
---
 drivers/net/ethernet/google/gve/gve_tx.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 5d75dec1c520..5beff4d4d2e3 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -602,8 +602,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
 	struct gve_tx_ring *tx;
 	int nsegs;
 
-	WARN(skb_get_queue_mapping(skb) > priv->tx_cfg.num_queues,
-	     "skb queue index out of range");
+	WARN_ON_ONCE(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues);
 	tx = &priv->tx[skb_get_queue_mapping(skb)];
 	if (unlikely(gve_maybe_stop_tx(tx, skb))) {
 		/* We need to ring the txq doorbell -- we have stopped the Tx