From patchwork Tue Dec 8 18:02:02 2020
X-Patchwork-Submitter: Shay Agroskin
X-Patchwork-Id: 340232
From: Shay Agroskin
To: Jakub Kicinski
CC: Shay Agroskin, David Woodhouse, Zorik Machulsky, Alexander Matushevsky,
 Bshara Saeed, Matt Wilson, Anthony Liguori, Nafea Bshara, Guy Tzalik,
 Netanel Belgazal, Ali Saidi, Benjamin Herrenschmidt, Arthur Kiyanovski,
 Samih Jubran, Noam Dagan, Alexander Duyck, Ido Segev, Igor Chauskin
Subject: [PATCH net-next v5 3/9] net: ena: store values in their appropriate variable types
Date: Tue, 8 Dec 2020 20:02:02 +0200
Message-ID: <20201208180208.26111-4-shayagr@amazon.com>
In-Reply-To: <20201208180208.26111-1-shayagr@amazon.com>
References: <20201208180208.26111-1-shayagr@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

This patch changes the types of some variables to match the values they
hold. The wrong types trip some of our static checkers, which search for
accidental type conversions in the driver.
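For context, below is a minimal userspace sketch of the kernel's
error-pointer convention that this patch's first hunk relies on: capture
PTR_ERR() once into a plain int and print it with %d. The helpers mirror
include/linux/err.h; the main() driver and the hard-coded ENODEV value are
illustrative only, not part of the patch.

#include <stdio.h>

#define MAX_ERRNO	4095
#define ENODEV		19

/* Errors are encoded as pointers in the top MAX_ERRNO values of the
 * address space, mirroring include/linux/err.h.
 */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *comp_ctx = ERR_PTR(-ENODEV);	/* a failed submission */

	if (IS_ERR(comp_ctx)) {
		/* capture once as an int, then reuse it everywhere */
		int ret = PTR_ERR(comp_ctx);

		printf("Failed to submit command [%d]\n", ret);
		return 1;
	}
	return 0;
}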
Signed-off-by: Ido Segev
Signed-off-by: Igor Chauskin
Signed-off-by: Shay Agroskin
---
 drivers/net/ethernet/amazon/ena/ena_com.c | 15 +++++++--------
 drivers/net/ethernet/amazon/ena/ena_com.h |  2 +-
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
index e168edf3c930..02087d443e73 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_com.c
@@ -1360,16 +1360,15 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue,
 	comp_ctx = ena_com_submit_admin_cmd(admin_queue, cmd, cmd_size, comp, comp_size);
 	if (IS_ERR(comp_ctx)) {
-		if (comp_ctx == ERR_PTR(-ENODEV))
+		ret = PTR_ERR(comp_ctx);
+		if (ret == -ENODEV)
 			netdev_dbg(admin_queue->ena_dev->net_device,
-				   "Failed to submit command [%ld]\n",
-				   PTR_ERR(comp_ctx));
+				   "Failed to submit command [%d]\n", ret);
 		else
 			netdev_err(admin_queue->ena_dev->net_device,
-				   "Failed to submit command [%ld]\n",
-				   PTR_ERR(comp_ctx));
+				   "Failed to submit command [%d]\n", ret);
 
-		return PTR_ERR(comp_ctx);
+		return ret;
 	}
 
 	ret = ena_com_wait_and_process_admin_cq(comp_ctx, admin_queue);
@@ -1595,7 +1594,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
 int ena_com_get_dma_width(struct ena_com_dev *ena_dev)
 {
 	u32 caps = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CAPS_OFF);
-	int width;
+	u32 width;
 
 	if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) {
 		netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
@@ -2247,7 +2246,7 @@ int ena_com_get_dev_basic_stats(struct ena_com_dev *ena_dev,
 	return ret;
 }
 
-int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu)
+int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, u32 mtu)
 {
 	struct ena_com_admin_queue *admin_queue;
 	struct ena_admin_set_feat_cmd cmd;
diff --git a/drivers/net/ethernet/amazon/ena/ena_com.h b/drivers/net/ethernet/amazon/ena/ena_com.h
index b0f76fb3b1d7..343caf41e709 100644
--- a/drivers/net/ethernet/amazon/ena/ena_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_com.h
@@ -605,7 +605,7 @@ int ena_com_get_eni_stats(struct ena_com_dev *ena_dev,
  *
  * @return: 0 on Success and negative value otherwise.
  */
-int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu);
+int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, u32 mtu);
 
 /* ena_com_get_offload_settings - Retrieve the device offloads capabilities
  * @ena_dev: ENA communication layer struct

From patchwork Tue Dec 8 18:02:03 2020
X-Patchwork-Submitter: Shay Agroskin
X-Patchwork-Id: 340228
From: Shay Agroskin
To: Jakub Kicinski
CC: Shay Agroskin, David Woodhouse, Zorik Machulsky, Alexander Matushevsky,
 Bshara Saeed, Matt Wilson, Anthony Liguori, Nafea Bshara, Guy Tzalik,
 Netanel Belgazal, Ali Saidi, Benjamin Herrenschmidt, Arthur Kiyanovski,
 Samih Jubran, Noam Dagan, Kuniyuki Iwashima
Subject: [PATCH net-next v5 4/9] net: ena: fix coding style nits
Date: Tue, 8 Dec 2020 20:02:03 +0200
Message-ID: <20201208180208.26111-5-shayagr@amazon.com>
In-Reply-To: <20201208180208.26111-1-shayagr@amazon.com>
References: <20201208180208.26111-1-shayagr@amazon.com>
X-Mailing-List: netdev@vger.kernel.org
This commit fixes two nits; neither changes the generated binary, thanks
to GCC's optimizations:

- use `count` instead of `channels->combined_count`
- change the return type of ena_xdp_legal_queue_count() from `int` to `bool`

Also add spacing and change the macro order in an OR assignment to make
the code easier to read.

Signed-off-by: Sameeh Jubran
Signed-off-by: Kuniyuki Iwashima
Signed-off-by: Shay Agroskin
---
 drivers/net/ethernet/amazon/ena/ena_eth_com.c | 5 +++--
 drivers/net/ethernet/amazon/ena/ena_ethtool.c | 2 +-
 drivers/net/ethernet/amazon/ena/ena_netdev.h  | 4 ++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
index 85daeac219ec..c3be751e7379 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
@@ -578,6 +578,7 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
 	ena_com_rx_set_flags(io_cq, ena_rx_ctx, cdesc);
 
 	ena_rx_ctx->descs = nb_hw_desc;
+
 	return 0;
 }
 
@@ -602,8 +603,8 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
 	desc->ctrl = ENA_ETH_IO_RX_DESC_FIRST_MASK |
 		     ENA_ETH_IO_RX_DESC_LAST_MASK |
-		     (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK) |
-		     ENA_ETH_IO_RX_DESC_COMP_REQ_MASK;
+		     ENA_ETH_IO_RX_DESC_COMP_REQ_MASK |
+		     (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK);
 
 	desc->req_id = req_id;
 
diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 6cdd9efe8df3..2ad44ae74cf6 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -839,7 +839,7 @@ static int ena_set_channels(struct net_device *netdev,
 	/* The check for max value is already done in ethtool */
 	if (count < ENA_MIN_NUM_IO_QUEUES ||
 	    (ena_xdp_present(adapter) &&
-	     !ena_xdp_legal_queue_count(adapter, channels->combined_count)))
+	     !ena_xdp_legal_queue_count(adapter, count)))
 		return -EINVAL;
 
 	return ena_update_queue_count(adapter, count);
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 30eb686749dc..c39f41711c31 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -433,8 +433,8 @@ static inline bool ena_xdp_present_ring(struct ena_ring *ring)
 	return !!ring->xdp_bpf_prog;
 }
 
-static inline int ena_xdp_legal_queue_count(struct ena_adapter *adapter,
-					    u32 queues)
+static inline bool ena_xdp_legal_queue_count(struct ena_adapter *adapter,
+					     u32 queues)
 {
 	return 2 * queues <= adapter->max_num_io_queues;
 }
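A standalone sketch of the ena_xdp_legal_queue_count() nit, compilable in
userspace; the parameter names and the main() driver are illustrative,
not taken from the driver. Declaring the predicate as bool documents
intent without changing the generated code, since the comparison already
evaluates to 0 or 1.

#include <stdbool.h>
#include <stdio.h>

/* Each XDP channel claims an extra TX queue, hence the factor of 2. */
static bool xdp_legal_queue_count(unsigned int queues,
				  unsigned int max_num_io_queues)
{
	return 2 * queues <= max_num_io_queues;
}

int main(void)
{
	printf("%d\n", xdp_legal_queue_count(4, 8));	/* prints 1 */
	printf("%d\n", xdp_legal_queue_count(5, 8));	/* prints 0 */
	return 0;
}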
From patchwork Tue Dec 8 18:02:04 2020
X-Patchwork-Submitter: Shay Agroskin
X-Patchwork-Id: 340231
From: Shay Agroskin
To: Jakub Kicinski
CC: Shay Agroskin, David Woodhouse, Zorik Machulsky, Alexander Matushevsky,
 Bshara Saeed, Matt Wilson, Anthony Liguori, Nafea Bshara, Guy Tzalik,
 Netanel Belgazal, Ali Saidi, Benjamin Herrenschmidt, Arthur Kiyanovski,
 Samih Jubran, Noam Dagan
Subject: [PATCH net-next v5 5/9] net: ena: aggregate stats increase into a function
Date: Tue, 8 Dec 2020 20:02:04 +0200
Message-ID: <20201208180208.26111-6-shayagr@amazon.com>
In-Reply-To: <20201208180208.26111-1-shayagr@amazon.com>
References: <20201208180208.26111-1-shayagr@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

Introduce an ena_increase_stat() function that increases a statistic by
a given amount. The function wraps the

- lock acquire (on 32-bit machines)
- stat increase
- lock release (on 32-bit machines)

sequence that is ubiquitous across the driver. The function increases a
single stat at a time; stats that are increased together were
deliberately not folded into one helper, since that would mean calling
the function once per stat, which reads poorly and might hurt
performance.
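The helper below covers only the writer side of the u64_stats_sync
seqcount. For context, here is a hedged sketch of the matching reader
side, which is why the seqcount is needed at all on 32-bit machines (a
torn read could otherwise observe half of a 64-bit counter update). The
function name is illustrative and not part of this patch; the syncp and
dev_stats fields are the driver's existing ones.

#include <linux/u64_stats_sync.h>

#include "ena_netdev.h"	/* for struct ena_adapter (driver header) */

static u64 ena_read_tx_timeout_stat(struct ena_adapter *adapter)
{
	unsigned int start;
	u64 val;

	do {
		/* snapshot the seqcount; re-read if a writer raced us */
		start = u64_stats_fetch_begin(&adapter->syncp);
		val = adapter->dev_stats.tx_timeout;
	} while (u64_stats_fetch_retry(&adapter->syncp, start));

	return val;
}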
Signed-off-by: Shay Agroskin
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 167 ++++++++-----------
 1 file changed, 68 insertions(+), 99 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index b3eef9bd4716..0c17e5b37fc4 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -80,6 +80,15 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring,
 static int ena_create_io_tx_queues_in_range(struct ena_adapter *adapter,
 					    int first_index, int count);
 
+/* Increase a stat by cnt while holding syncp seqlock on 32bit machines */
+static void ena_increase_stat(u64 *statp, u64 cnt,
+			      struct u64_stats_sync *syncp)
+{
+	u64_stats_update_begin(syncp);
+	(*statp) += cnt;
+	u64_stats_update_end(syncp);
+}
+
 static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct ena_adapter *adapter = netdev_priv(dev);
@@ -92,9 +101,7 @@ static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue)
 		return;
 
 	adapter->reset_reason = ENA_REGS_RESET_OS_NETDEV_WD;
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.tx_timeout++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.tx_timeout, 1, &adapter->syncp);
 
 	netif_err(adapter, tx_err, dev, "Transmit time out\n");
 }
@@ -154,9 +161,8 @@ static int ena_xmit_common(struct net_device *dev,
 	if (unlikely(rc)) {
 		netif_err(adapter, tx_queued, dev,
 			  "Failed to prepare tx bufs\n");
-		u64_stats_update_begin(&ring->syncp);
-		ring->tx_stats.prepare_ctx_err++;
-		u64_stats_update_end(&ring->syncp);
+		ena_increase_stat(&ring->tx_stats.prepare_ctx_err, 1,
+				  &ring->syncp);
 		if (rc != -ENOMEM) {
 			adapter->reset_reason =
 				ENA_REGS_RESET_DRIVER_INVALID_STATE;
@@ -264,9 +270,8 @@ static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring,
 	return 0;
 
 error_report_dma_error:
-	u64_stats_update_begin(&xdp_ring->syncp);
-	xdp_ring->tx_stats.dma_mapping_err++;
-	u64_stats_update_end(&xdp_ring->syncp);
+	ena_increase_stat(&xdp_ring->tx_stats.dma_mapping_err, 1,
+			  &xdp_ring->syncp);
 	netif_warn(adapter, tx_queued, adapter->netdev, "Failed to map xdp buff\n");
 
 	xdp_return_frame_rx_napi(tx_info->xdpf);
@@ -320,9 +325,7 @@ static int ena_xdp_xmit_buff(struct net_device *dev,
 	 * has a mb
 	 */
 	ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
-	u64_stats_update_begin(&xdp_ring->syncp);
-	xdp_ring->tx_stats.doorbells++;
-	u64_stats_update_end(&xdp_ring->syncp);
+	ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, &xdp_ring->syncp);
 
 	return NETDEV_TX_OK;
 
@@ -369,9 +372,7 @@ static int ena_xdp_execute(struct ena_ring *rx_ring,
 		xdp_stat = &rx_ring->rx_stats.xdp_invalid;
 	}
 
-	u64_stats_update_begin(&rx_ring->syncp);
-	(*xdp_stat)++;
-	u64_stats_update_end(&rx_ring->syncp);
+	ena_increase_stat(xdp_stat, 1, &rx_ring->syncp);
 out:
 	rcu_read_unlock();
@@ -924,9 +925,8 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring,
 	page = alloc_page(gfp);
 	if (unlikely(!page)) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.page_alloc_fail++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.page_alloc_fail, 1,
+				  &rx_ring->syncp);
 		return -ENOMEM;
 	}
 
@@ -936,9 +936,8 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring,
 	dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE,
 			   DMA_BIDIRECTIONAL);
 	if (unlikely(dma_mapping_error(rx_ring->dev, dma))) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.dma_mapping_err++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.dma_mapping_err, 1,
+				  &rx_ring->syncp);
 
 		__free_page(page);
 		return -EIO;
@@ -1011,9 +1010,8 @@ static int ena_refill_rx_bufs(struct ena_ring *rx_ring, u32 num)
 	}
 
 	if (unlikely(i < num)) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.refil_partial++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.refil_partial, 1,
+				  &rx_ring->syncp);
 		netif_warn(rx_ring->adapter, rx_err, rx_ring->netdev,
 			   "Refilled rx qid %d with only %d buffers (from %d)\n",
 			   rx_ring->qid, i, num);
@@ -1189,9 +1187,7 @@ static int handle_invalid_req_id(struct ena_ring *ring, u16 req_id,
 			  "Invalid req_id: %hu\n",
 			  req_id);
 
-	u64_stats_update_begin(&ring->syncp);
-	ring->tx_stats.bad_req_id++;
-	u64_stats_update_end(&ring->syncp);
+	ena_increase_stat(&ring->tx_stats.bad_req_id, 1, &ring->syncp);
 
 	/* Trigger device reset */
 	ring->adapter->reset_reason = ENA_REGS_RESET_INV_TX_REQ_ID;
@@ -1302,9 +1298,8 @@ static int ena_clean_tx_irq(struct ena_ring *tx_ring, u32 budget)
 		if (netif_tx_queue_stopped(txq) && above_thresh &&
 		    test_bit(ENA_FLAG_DEV_UP, &tx_ring->adapter->flags)) {
 			netif_tx_wake_queue(txq);
-			u64_stats_update_begin(&tx_ring->syncp);
-			tx_ring->tx_stats.queue_wakeup++;
-			u64_stats_update_end(&tx_ring->syncp);
+			ena_increase_stat(&tx_ring->tx_stats.queue_wakeup, 1,
+					  &tx_ring->syncp);
 		}
 		__netif_tx_unlock(txq);
 	}
@@ -1323,9 +1318,8 @@ static struct sk_buff *ena_alloc_skb(struct ena_ring *rx_ring, bool frags)
 					rx_ring->rx_copybreak);
 
 	if (unlikely(!skb)) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.skb_alloc_fail++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.skb_alloc_fail, 1,
+				  &rx_ring->syncp);
 		netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev,
 			  "Failed to allocate skb. frags: %d\n", frags);
 		return NULL;
@@ -1453,9 +1447,8 @@ static void ena_rx_checksum(struct ena_ring *rx_ring,
 		     (ena_rx_ctx->l3_csum_err))) {
 		/* ipv4 checksum error */
 		skb->ip_summed = CHECKSUM_NONE;
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.bad_csum++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.bad_csum, 1,
+				  &rx_ring->syncp);
 		netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev,
 			  "RX IPv4 header checksum error\n");
 		return;
@@ -1466,9 +1459,8 @@ static void ena_rx_checksum(struct ena_ring *rx_ring,
 		   (ena_rx_ctx->l4_proto == ENA_ETH_IO_L4_PROTO_UDP))) {
 		if (unlikely(ena_rx_ctx->l4_csum_err)) {
 			/* TCP/UDP checksum error */
-			u64_stats_update_begin(&rx_ring->syncp);
-			rx_ring->rx_stats.bad_csum++;
-			u64_stats_update_end(&rx_ring->syncp);
+			ena_increase_stat(&rx_ring->rx_stats.bad_csum, 1,
+					  &rx_ring->syncp);
 			netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev,
 				  "RX L4 checksum error\n");
 			skb->ip_summed = CHECKSUM_NONE;
@@ -1477,13 +1469,11 @@ static void ena_rx_checksum(struct ena_ring *rx_ring,
 
 		if (likely(ena_rx_ctx->l4_csum_checked)) {
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
-			u64_stats_update_begin(&rx_ring->syncp);
-			rx_ring->rx_stats.csum_good++;
-			u64_stats_update_end(&rx_ring->syncp);
+			ena_increase_stat(&rx_ring->rx_stats.csum_good, 1,
+					  &rx_ring->syncp);
 		} else {
-			u64_stats_update_begin(&rx_ring->syncp);
-			rx_ring->rx_stats.csum_unchecked++;
-			u64_stats_update_end(&rx_ring->syncp);
+			ena_increase_stat(&rx_ring->rx_stats.csum_unchecked, 1,
+					  &rx_ring->syncp);
 			skb->ip_summed = CHECKSUM_NONE;
 		}
 	} else {
@@ -1675,14 +1665,12 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
 	adapter = netdev_priv(rx_ring->netdev);
 
 	if (rc == -ENOSPC) {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.bad_desc_num++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.bad_desc_num, 1,
+				  &rx_ring->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS;
 	} else {
-		u64_stats_update_begin(&rx_ring->syncp);
-		rx_ring->rx_stats.bad_req_id++;
-		u64_stats_update_end(&rx_ring->syncp);
+		ena_increase_stat(&rx_ring->rx_stats.bad_req_id, 1,
+				  &rx_ring->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID;
 	}
 
@@ -1743,9 +1731,8 @@ static void ena_unmask_interrupt(struct ena_ring *tx_ring,
 				  tx_ring->smoothed_interval,
 				  true);
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.unmask_interrupt++;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.unmask_interrupt, 1,
+			  &tx_ring->syncp);
 
 	/* It is a shared MSI-X.
 	 * Tx and Rx CQ have pointer to it.
@@ -2552,9 +2539,8 @@ static int ena_up(struct ena_adapter *adapter)
 	if (test_bit(ENA_FLAG_LINK_UP, &adapter->flags))
 		netif_carrier_on(adapter->netdev);
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.interface_up++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.interface_up, 1,
+			  &adapter->syncp);
 
 	set_bit(ENA_FLAG_DEV_UP, &adapter->flags);
 
@@ -2592,9 +2578,8 @@ static void ena_down(struct ena_adapter *adapter)
 
 	clear_bit(ENA_FLAG_DEV_UP, &adapter->flags);
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.interface_down++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.interface_down, 1,
+			  &adapter->syncp);
 
 	netif_carrier_off(adapter->netdev);
 	netif_tx_disable(adapter->netdev);
@@ -2822,15 +2807,12 @@ static int ena_check_and_linearize_skb(struct ena_ring *tx_ring,
 	    (header_len < tx_ring->tx_max_header_size))
 		return 0;
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.linearize++;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.linearize, 1, &tx_ring->syncp);
 
 	rc = skb_linearize(skb);
 	if (unlikely(rc)) {
-		u64_stats_update_begin(&tx_ring->syncp);
-		tx_ring->tx_stats.linearize_failed++;
-		u64_stats_update_end(&tx_ring->syncp);
+		ena_increase_stat(&tx_ring->tx_stats.linearize_failed, 1,
+				  &tx_ring->syncp);
 	}
 
 	return rc;
@@ -2870,9 +2852,8 @@ static int ena_tx_map_skb(struct ena_ring *tx_ring,
 					       tx_ring->push_buf_intermediate_buf);
 		*header_len = push_len;
 		if (unlikely(skb->data != *push_hdr)) {
-			u64_stats_update_begin(&tx_ring->syncp);
-			tx_ring->tx_stats.llq_buffer_copy++;
-			u64_stats_update_end(&tx_ring->syncp);
+			ena_increase_stat(&tx_ring->tx_stats.llq_buffer_copy, 1,
+					  &tx_ring->syncp);
 
 			delta = push_len - skb_head_len;
 		}
@@ -2929,9 +2910,8 @@ static int ena_tx_map_skb(struct ena_ring *tx_ring,
 	return 0;
 
 error_report_dma_error:
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.dma_mapping_err++;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.dma_mapping_err, 1,
+			  &tx_ring->syncp);
 	netif_warn(adapter, tx_queued, adapter->netdev, "Failed to map skb\n");
 
 	tx_info->skb = NULL;
@@ -3008,9 +2988,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
 			  __func__, qid);
 
 		netif_tx_stop_queue(txq);
-		u64_stats_update_begin(&tx_ring->syncp);
-		tx_ring->tx_stats.queue_stop++;
-		u64_stats_update_end(&tx_ring->syncp);
+		ena_increase_stat(&tx_ring->tx_stats.queue_stop, 1,
+				  &tx_ring->syncp);
 
 		/* There is a rare condition where this function decide to
 		 * stop the queue but meanwhile clean_tx_irq updates
@@ -3025,9 +3004,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq,
 						 ENA_TX_WAKEUP_THRESH)) {
 			netif_tx_wake_queue(txq);
-			u64_stats_update_begin(&tx_ring->syncp);
-			tx_ring->tx_stats.queue_wakeup++;
-			u64_stats_update_end(&tx_ring->syncp);
+			ena_increase_stat(&tx_ring->tx_stats.queue_wakeup, 1,
+					  &tx_ring->syncp);
 		}
 	}
 
@@ -3036,9 +3014,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		 * has a mb
 		 */
 		ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
-		u64_stats_update_begin(&tx_ring->syncp);
-		tx_ring->tx_stats.doorbells++;
-		u64_stats_update_end(&tx_ring->syncp);
+		ena_increase_stat(&tx_ring->tx_stats.doorbells, 1,
+				  &tx_ring->syncp);
 	}
 
 	return NETDEV_TX_OK;
@@ -3673,9 +3650,8 @@ static int check_missing_comp_in_tx_queue(struct ena_adapter *adapter,
 		rc = -EIO;
 	}
 
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.missed_tx += missed_tx;
-	u64_stats_update_end(&tx_ring->syncp);
+	ena_increase_stat(&tx_ring->tx_stats.missed_tx, missed_tx,
+			  &tx_ring->syncp);
 
 	return rc;
 }
@@ -3758,9 +3734,8 @@ static void check_for_empty_rx_ring(struct ena_adapter *adapter)
 			rx_ring->empty_rx_queue++;
 
 			if (rx_ring->empty_rx_queue >= EMPTY_RX_REFILL) {
-				u64_stats_update_begin(&rx_ring->syncp);
-				rx_ring->rx_stats.empty_rx_ring++;
-				u64_stats_update_end(&rx_ring->syncp);
+				ena_increase_stat(&rx_ring->rx_stats.empty_rx_ring, 1,
+						  &rx_ring->syncp);
 
 				netif_err(adapter, drv, adapter->netdev,
 					  "Trigger refill for ring %d\n", i);
@@ -3790,9 +3765,8 @@ static void check_for_missing_keep_alive(struct ena_adapter *adapter)
 	if (unlikely(time_is_before_jiffies(keep_alive_expired))) {
 		netif_err(adapter, drv, adapter->netdev,
 			  "Keep alive watchdog timeout.\n");
-		u64_stats_update_begin(&adapter->syncp);
-		adapter->dev_stats.wd_expired++;
-		u64_stats_update_end(&adapter->syncp);
+		ena_increase_stat(&adapter->dev_stats.wd_expired, 1,
+				  &adapter->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_KEEP_ALIVE_TO;
 		set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
 	}
@@ -3803,9 +3777,8 @@ static void check_for_admin_com_state(struct ena_adapter *adapter)
 	if (unlikely(!ena_com_get_admin_running_state(adapter->ena_dev))) {
 		netif_err(adapter, drv, adapter->netdev,
 			  "ENA admin queue is not in running state!\n");
-		u64_stats_update_begin(&adapter->syncp);
-		adapter->dev_stats.admin_q_pause++;
-		u64_stats_update_end(&adapter->syncp);
+		ena_increase_stat(&adapter->dev_stats.admin_q_pause, 1,
+				  &adapter->syncp);
 		adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO;
 		set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
 	}
@@ -4441,9 +4414,7 @@ static int __maybe_unused ena_suspend(struct device *dev_d)
 	struct pci_dev *pdev = to_pci_dev(dev_d);
 	struct ena_adapter *adapter = pci_get_drvdata(pdev);
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.suspend++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.suspend, 1, &adapter->syncp);
 
 	rtnl_lock();
 	if (unlikely(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) {
@@ -4464,9 +4435,7 @@ static int __maybe_unused ena_resume(struct device *dev_d)
 	struct ena_adapter *adapter = dev_get_drvdata(dev_d);
 	int rc;
 
-	u64_stats_update_begin(&adapter->syncp);
-	adapter->dev_stats.resume++;
-	u64_stats_update_end(&adapter->syncp);
+	ena_increase_stat(&adapter->dev_stats.resume, 1, &adapter->syncp);
 
 	rtnl_lock();
 	rc = ena_restore_device(adapter);
From patchwork Tue Dec 8 18:02:07 2020
X-Patchwork-Submitter: Shay Agroskin
X-Patchwork-Id: 340230
From: Shay Agroskin
To: Jakub Kicinski
CC: Shay Agroskin, David Woodhouse, Zorik Machulsky, Alexander Matushevsky,
 Bshara Saeed, Matt Wilson, Anthony Liguori, Nafea Bshara, Guy Tzalik,
 Netanel Belgazal, Ali Saidi, Benjamin Herrenschmidt, Arthur Kiyanovski,
 Samih Jubran, Noam Dagan, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend
Subject: [PATCH net-next v5 8/9] net: ena: use xdp_return_frame() to free xdp frames
Date: Tue, 8 Dec 2020 20:02:07 +0200
Message-ID: <20201208180208.26111-9-shayagr@amazon.com>
In-Reply-To: <20201208180208.26111-1-shayagr@amazon.com>
References: <20201208180208.26111-1-shayagr@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

The XDP subsystem has a function to free XDP frames and their associated
pages. Using this function helps the driver's XDP implementation adjust
to future changes in the kernel's XDP subsystem (e.g. the introduction of
XDP multi-buffer). Also, remove the 'xdp_rx_page' field from the
ena_tx_buffer struct, since it is no longer used.
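A hedged sketch of the frame lifecycle this patch relies on; the
functions prefixed with my_ are illustrative and not part of the driver.
On XDP_TX the received buffer is converted into an xdp_frame, and on TX
completion a single xdp_return_frame() call releases the frame together
with its backing memory, whatever memory model the frame came from.

#include <net/xdp.h>

/* RX side: hand the received buffer to the XDP TX queue as a frame */
static struct xdp_frame *my_xdp_tx_prepare(struct xdp_buff *xdp)
{
	/* returns NULL if the buff cannot be converted in place */
	return xdp_convert_buff_to_frame(xdp);
}

/* TX completion side: one call frees the frame and its page(s),
 * replacing the driver-private __free_page(tx_info->xdp_rx_page)
 */
static void my_xdp_tx_complete(struct xdp_frame *xdpf)
{
	xdp_return_frame(xdpf);
}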
Signed-off-by: Shay Agroskin
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 3 +--
 drivers/net/ethernet/amazon/ena/ena_netdev.h | 6 ------
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index d47814b16834..a4dc794e7389 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -299,7 +299,6 @@ static int ena_xdp_xmit_frame(struct net_device *dev,
 	req_id = xdp_ring->free_ids[next_to_use];
 	tx_info = &xdp_ring->tx_buffer_info[req_id];
 	tx_info->num_of_bufs = 0;
-	tx_info->xdp_rx_page = virt_to_page(xdpf->data);
 
 	rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len);
 	if (unlikely(rc))
@@ -1836,7 +1835,7 @@ static int ena_clean_xdp_irq(struct ena_ring *xdp_ring, u32 budget)
 		tx_pkts++;
 		total_done += tx_info->tx_descs;
 
-		__free_page(tx_info->xdp_rx_page);
+		xdp_return_frame(xdpf);
 		xdp_ring->free_ids[next_to_clean] = req_id;
 		next_to_clean = ENA_TX_RING_IDX_NEXT(next_to_clean,
 						     xdp_ring->ring_size);
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 0fef876c23eb..fed79c50a870 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -170,12 +170,6 @@ struct ena_tx_buffer {
 	 * the xdp queues
 	 */
 	struct xdp_frame *xdpf;
-	/* The rx page for the rx buffer that was received in rx and
-	 * re transmitted on xdp tx queues as a result of XDP_TX action.
-	 * We need to free the page once we finished cleaning the buffer in
-	 * clean_xdp_irq()
-	 */
-	struct page *xdp_rx_page;
 	/* Indicate if bufs[0] map the linear data of the skb. */
 	u8 map_linear_data;
From patchwork Tue Dec 8 18:02:08 2020
X-Patchwork-Submitter: Shay Agroskin
X-Patchwork-Id: 340229
From: Shay Agroskin
To: Jakub Kicinski
CC: Shay Agroskin, David Woodhouse, Zorik Machulsky, Alexander Matushevsky,
 Bshara Saeed, Matt Wilson, Anthony Liguori, Nafea Bshara, Guy Tzalik,
 Netanel Belgazal, Ali Saidi, Benjamin Herrenschmidt, Arthur Kiyanovski,
 Samih Jubran, Noam Dagan, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend
Subject: [PATCH net-next v5 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT
Date: Tue, 8 Dec 2020 20:02:08 +0200
Message-ID: <20201208180208.26111-10-shayagr@amazon.com>
In-Reply-To: <20201208180208.26111-1-shayagr@amazon.com>
References: <20201208180208.26111-1-shayagr@amazon.com>
X-Mailing-List: netdev@vger.kernel.org

This patch implements the ndo_xdp_xmit() net_device function, which is
called when a packet is redirected to this driver by an XDP_REDIRECT
directive. The function receives an array of XDP frames that it needs to
xmit. The TX queues used to xmit these frames are the XDP queues used by
the XDP_TX flow, so a lock is added to synchronize the two flows (XDP_TX
and XDP_REDIRECT).

Signed-off-by: Shay Agroskin
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 83 +++++++++++++++++---
 drivers/net/ethernet/amazon/ena/ena_netdev.h |  1 +
 2 files changed, 72 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index a4dc794e7389..06596fa1f9fe 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -281,20 +281,18 @@ static int ena_xdp_tx_map_frame(struct ena_ring *xdp_ring,
 	return -EINVAL;
 }
 
-static int ena_xdp_xmit_frame(struct net_device *dev,
+static int ena_xdp_xmit_frame(struct ena_ring *xdp_ring,
+			      struct net_device *dev,
 			      struct xdp_frame *xdpf,
-			      int qid)
+			      int flags)
 {
-	struct ena_adapter *adapter = netdev_priv(dev);
 	struct ena_com_tx_ctx ena_tx_ctx = {};
 	struct ena_tx_buffer *tx_info;
-	struct ena_ring *xdp_ring;
 	u16 next_to_use, req_id;
-	int rc;
 	void *push_hdr;
 	u32 push_len;
+	int rc;
 
-	xdp_ring = &adapter->tx_ring[qid];
 	next_to_use = xdp_ring->next_to_use;
 	req_id = xdp_ring->free_ids[next_to_use];
 	tx_info = &xdp_ring->tx_buffer_info[req_id];
@@ -321,25 +319,76 @@ static int ena_xdp_xmit_frame(struct net_device *dev,
 	/* trigger the dma engine. ena_com_write_sq_doorbell()
 	 * has a mb
 	 */
-	ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
-	ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, &xdp_ring->syncp);
+	if (flags & XDP_XMIT_FLUSH) {
+		ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
+		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
+				  &xdp_ring->syncp);
+	}
 
-	return NETDEV_TX_OK;
+	return rc;
 
 error_unmap_dma:
 	ena_unmap_tx_buff(xdp_ring, tx_info);
 	tx_info->xdpf = NULL;
 error_drop_packet:
 	xdp_return_frame(xdpf);
-	return NETDEV_TX_OK;
+	return rc;
+}
+
+static int ena_xdp_xmit(struct net_device *dev, int n,
+			struct xdp_frame **frames, u32 flags)
+{
+	struct ena_adapter *adapter = netdev_priv(dev);
+	int qid, i, err, drops = 0;
+	struct ena_ring *xdp_ring;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	if (!test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
+		return -ENETDOWN;
+
+	/* We assume that all rings have the same XDP program */
+	if (!READ_ONCE(adapter->rx_ring->xdp_bpf_prog))
+		return -ENXIO;
+
+	qid = smp_processor_id() % adapter->xdp_num_queues;
+	qid += adapter->xdp_first_ring;
+	xdp_ring = &adapter->tx_ring[qid];
+
+	/* Other CPU ids might try to send through this queue */
+	spin_lock(&xdp_ring->xdp_tx_lock);
+
+	for (i = 0; i < n; i++) {
+		err = ena_xdp_xmit_frame(xdp_ring, dev, frames[i], 0);
+		/* The descriptor is freed by ena_xdp_xmit_frame in case
+		 * of an error.
+		 */
+		if (err)
+			drops++;
+	}
+
+	/* Ring doorbell to make device aware of the packets */
+	if (flags & XDP_XMIT_FLUSH) {
+		ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq);
+		ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1,
+				  &xdp_ring->syncp);
+	}
+
+	spin_unlock(&xdp_ring->xdp_tx_lock);
+
+	/* Return number of packets sent */
+	return n - drops;
 }
 
 static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 {
 	struct bpf_prog *xdp_prog;
+	struct ena_ring *xdp_ring;
 	u32 verdict = XDP_PASS;
 	struct xdp_frame *xdpf;
 	u64 *xdp_stat;
+	int qid;
 
 	rcu_read_lock();
 	xdp_prog = READ_ONCE(rx_ring->xdp_bpf_prog);
@@ -358,8 +407,16 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)
 			break;
 		}
 
-		ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
-				   rx_ring->qid + rx_ring->adapter->num_io_queues);
+		/* Find xmit queue */
+		qid = rx_ring->qid + rx_ring->adapter->num_io_queues;
+		xdp_ring = &rx_ring->adapter->tx_ring[qid];
+
+		/* The XDP queues are shared between XDP_TX and XDP_REDIRECT */
+		spin_lock(&xdp_ring->xdp_tx_lock);
+
+		ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, xdpf, XDP_XMIT_FLUSH);
+
+		spin_unlock(&xdp_ring->xdp_tx_lock);
 
 		xdp_stat = &rx_ring->rx_stats.xdp_tx;
 		break;
 	case XDP_REDIRECT:
@@ -652,6 +709,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter,
 		txr->smoothed_interval =
 			ena_com_get_nonadaptive_moderation_interval_tx(ena_dev);
 		txr->disable_meta_caching = adapter->disable_meta_caching;
+		spin_lock_init(&txr->xdp_tx_lock);
 
 		/* Don't init RX queues for xdp queues */
 		if (!ENA_IS_XDP_INDEX(adapter, i)) {
@@ -3244,6 +3302,7 @@ static const struct net_device_ops ena_netdev_ops = {
 	.ndo_set_mac_address	= NULL,
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_bpf		= ena_xdp,
+	.ndo_xdp_xmit		= ena_xdp_xmit,
 };
 
 static int ena_device_validate_params(struct ena_adapter *adapter,
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index fed79c50a870..74af15d62ee1 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -258,6 +258,7 @@ struct ena_ring {
 	struct ena_com_io_sq *ena_com_io_sq;
 	struct bpf_prog *xdp_bpf_prog;
 	struct xdp_rxq_info xdp_rxq;
+	spinlock_t xdp_tx_lock;	/* synchronize XDP TX/Redirect traffic */
 
 	u16 next_to_use;
 	u16 next_to_clean;
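A hedged sketch of the ndo_xdp_xmit() contract as this patch implements
it: the driver frees frames it fails to queue and reports the number
actually sent; names prefixed with foo_ are illustrative. The core may
invoke this callback from any CPU, which is why the patch serializes
XDP_TX and XDP_REDIRECT on xdp_ring->xdp_tx_lock.

#include <linux/netdevice.h>
#include <net/xdp.h>

/* hypothetical helpers this sketch assumes a driver provides */
static int foo_queue_frame(struct net_device *dev, struct xdp_frame *xdpf);
static void foo_ring_doorbell(struct net_device *dev);

static int foo_xdp_xmit(struct net_device *dev, int n,
			struct xdp_frame **frames, u32 flags)
{
	int i, drops = 0;

	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
		return -EINVAL;

	for (i = 0; i < n; i++) {
		/* on failure the callee must free the frame itself,
		 * e.g. via xdp_return_frame()
		 */
		if (foo_queue_frame(dev, frames[i]))
			drops++;
	}

	if (flags & XDP_XMIT_FLUSH)
		foo_ring_doorbell(dev);

	return n - drops;	/* number of frames actually queued */
}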