From patchwork Wed Jan 20 14:27:16 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 367040
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Wed, 20 Jan 2021 19:57:16 +0530
Message-Id: <20210120142723.14090-2-hemant.agrawal@nxp.com>
In-Reply-To: <20210120142723.14090-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 1/7] bus/fslmc: fix to use ci value for qbman 5.0
List-Id: DPDK patches and discussions
From: Youri Querry

For qbman 5.0, pi == ci in general, so the extra checks are not needed;
they were causing issues. This fixes a few random packet-hang issues in
event mode.

Fixes: 1b49352f41be ("bus/fslmc: rename portal pi index to consumer index")
Cc: stable@dpdk.org

Signed-off-by: Youri Querry
Acked-by: Hemant Agrawal
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

--
2.17.1

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 77c9d508c4..aedcad9258 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -339,17 +339,9 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 	eqcr_pi = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_PI);
 	p->eqcr.pi = eqcr_pi & p->eqcr.pi_ci_mask;
 	p->eqcr.pi_vb = eqcr_pi & QB_VALID_BIT;
-	if ((p->desc.qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
-			&& (d->cena_access_mode == qman_cena_fastest_access))
-		p->eqcr.ci = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_PI)
-			& p->eqcr.pi_ci_mask;
-	else
-		p->eqcr.ci = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_CI)
-			& p->eqcr.pi_ci_mask;
-	p->eqcr.available = p->eqcr.pi_ring_size -
-			qm_cyc_diff(p->eqcr.pi_ring_size,
-			p->eqcr.ci & (p->eqcr.pi_ci_mask<<1),
-			p->eqcr.pi & (p->eqcr.pi_ci_mask<<1));
+	p->eqcr.ci = qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_EQCR_CI)
+			& p->eqcr.pi_ci_mask;
+	p->eqcr.available = p->eqcr.pi_ring_size;

 	portal_idx_map[p->desc.idx] = p;
 	return p;

From patchwork Wed Jan 20 14:27:17 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 367039
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Wed, 20 Jan 2021 19:57:17 +0530
Message-Id: <20210120142723.14090-3-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 2/7] net/dpaa2: fix link get API implementation
From: Rohit Raj

According to the DPDK documentation, the rte_eth_link_get API may wait
up to 9 seconds for auto-negotiation to finish before returning the
link status. The current implementation in the DPAA2 driver did not
wait for auto-negotiation and could return link status DOWN. This can
cause issues for DPDK applications that rely on rte_eth_link_get for
link status and do not support the link status interrupt; a similar
issue was seen with the TRex application.

This patch fixes the bug by waiting up to 9 seconds for
auto-negotiation to finish.

Fixes: c56c86ff87c1 ("net/dpaa2: update link status")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj
Acked-by: Hemant Agrawal
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

--
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 6f38da3cce..5c1a12b841 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -31,6 +31,8 @@
 #define DRIVER_LOOPBACK_MODE "drv_loopback"
 #define DRIVER_NO_PREFETCH_MODE "drv_no_prefetch"
+#define CHECK_INTERVAL         100  /* 100ms */
+#define MAX_REPEAT_TIME        90   /* 9s (90 * 100ms) in total */

 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
@@ -1805,23 +1807,32 @@ dpaa2_dev_stats_reset(struct rte_eth_dev *dev)
 /* return 0 means link status changed, -1 means not changed */
 static int
 dpaa2_dev_link_update(struct rte_eth_dev *dev,
-		      int wait_to_complete __rte_unused)
+		      int wait_to_complete)
 {
 	int ret;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
 	struct rte_eth_link link;
 	struct dpni_link_state state = {0};
+	uint8_t count;

 	if (dpni == NULL) {
 		DPAA2_PMD_ERR("dpni is NULL");
 		return 0;
 	}

-	ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token, &state);
-	if (ret < 0) {
-		DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
-		return -1;
+	for (count = 0; count <= MAX_REPEAT_TIME; count++) {
+		ret = dpni_get_link_state(dpni, CMD_PRI_LOW, priv->token,
+					  &state);
+		if (ret < 0) {
+			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
+			return -1;
+		}
+		if (state.up == ETH_LINK_DOWN &&
+		    wait_to_complete)
+			rte_delay_ms(CHECK_INTERVAL);
+		else
+			break;
 	}

 	memset(&link, 0, sizeof(struct rte_eth_link));

From patchwork Wed Jan 20 14:27:18 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 367041
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Wed, 20 Jan 2021 19:57:18 +0530
Message-Id: <20210120142723.14090-4-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 3/7] net/dpaa2: allocate SGT table from first segment

This patch enables use of the first segment's headroom, if space is
available, to build the HW-required Scatter Gather Table. This saves
one buffer allocation per SG frame.

Signed-off-by: Sachin Saxena
Signed-off-by: Hemant Agrawal
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  5 ++
 drivers/net/dpaa2/dpaa2_rxtx.c          | 65 +++++++++++++++++++------
 2 files changed, 54 insertions(+), 16 deletions(-)

--
2.17.1

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ac24f01451..0f15750b6c 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -330,6 +330,11 @@ enum qbman_fd_format {
 } while (0)
 #define DPAA2_FD_GET_FORMAT(fd)	(((fd)->simple.bpid_offset >> 28) & 0x3)

+#define DPAA2_SG_SET_FORMAT(sg, format)	do {		\
+	(sg)->fin_bpid_offset &= 0xCFFFFFFF;		\
+	(sg)->fin_bpid_offset |= (uint32_t)format << 28;	\
+} while (0)
+
 #define DPAA2_SG_SET_FINAL(sg, fin)	do {		\
 	(sg)->fin_bpid_offset &= 0x7FFFFFFF;		\
 	(sg)->fin_bpid_offset |= (uint32_t)fin << 31;	\
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 6b4526cd2d..7d2a7809d3 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2020 NXP
+ * Copyright 2016-2021 NXP
  *
  */
@@ -377,25 +377,47 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 static int __rte_noinline __rte_hot
 eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
-		  struct qbman_fd *fd, uint16_t bpid)
+		  struct qbman_fd *fd,
+		  struct rte_mempool *mp, uint16_t bpid)
 {
 	struct rte_mbuf *cur_seg = mbuf, *prev_seg, *mi, *temp;
 	struct qbman_sge *sgt, *sge = NULL;
-	int i;
+	int i, offset = 0;

-	temp = rte_pktmbuf_alloc(mbuf->pool);
-	if (temp == NULL) {
-		DPAA2_PMD_DP_DEBUG("No memory to allocate S/G table\n");
-		return -ENOMEM;
+#ifdef RTE_LIBRTE_IEEE1588
+	/* annotation area for timestamp in first buffer */
+	offset = 0x64;
+#endif
+	if (RTE_MBUF_DIRECT(mbuf) &&
+	    (mbuf->data_off > (mbuf->nb_segs * sizeof(struct qbman_sge)
+	     + offset))) {
+		temp = mbuf;
+		if (rte_mbuf_refcnt_read(temp) > 1) {
+			/* If refcnt > 1, invalid bpid is set to ensure
+			 * buffer is not freed by HW
+			 */
+			fd->simple.bpid_offset = 0;
+			DPAA2_SET_FD_IVP(fd);
+			rte_mbuf_refcnt_update(temp, -1);
+		} else {
+			DPAA2_SET_ONLY_FD_BPID(fd, bpid);
+		}
+		DPAA2_SET_FD_OFFSET(fd, offset);
+	} else {
+		temp = rte_pktmbuf_alloc(mp);
+		if (temp == NULL) {
+			DPAA2_PMD_DP_DEBUG("No memory to allocate S/G table\n");
+			return -ENOMEM;
+		}
+		DPAA2_SET_ONLY_FD_BPID(fd, bpid);
+		DPAA2_SET_FD_OFFSET(fd, temp->data_off);
 	}
 	DPAA2_SET_FD_ADDR(fd, DPAA2_MBUF_VADDR_TO_IOVA(temp));
 	DPAA2_SET_FD_LEN(fd, mbuf->pkt_len);
-	DPAA2_SET_ONLY_FD_BPID(fd, bpid);
-	DPAA2_SET_FD_OFFSET(fd, temp->data_off);
 	DPAA2_FD_SET_FORMAT(fd, qbman_fd_sg);
 	DPAA2_RESET_FD_FRC(fd);
 	DPAA2_RESET_FD_CTRL(fd);
+	DPAA2_RESET_FD_FLC(fd);

 	/*Set Scatter gather table and Scatter gather entries*/
 	sgt = (struct qbman_sge *)(
 		(size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
@@ -409,15 +431,24 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 		DPAA2_SET_FLE_OFFSET(sge, cur_seg->data_off);
 		sge->length = cur_seg->data_len;
 		if (RTE_MBUF_DIRECT(cur_seg)) {
-			if (rte_mbuf_refcnt_read(cur_seg) > 1) {
+			/* if we are using inline SGT in same buffers
+			 * set the FLE FMT as Frame Data Section
+			 */
+			if (temp == cur_seg) {
+				DPAA2_SG_SET_FORMAT(sge, qbman_fd_list);
+				DPAA2_SET_FLE_IVP(sge);
+			} else {
+				if (rte_mbuf_refcnt_read(cur_seg) > 1) {
 				/* If refcnt > 1, invalid bpid is set to ensure
 				 * buffer is not freed by HW
 				 */
-				DPAA2_SET_FLE_IVP(sge);
-				rte_mbuf_refcnt_update(cur_seg, -1);
-			} else
-				DPAA2_SET_FLE_BPID(sge,
+					DPAA2_SET_FLE_IVP(sge);
+					rte_mbuf_refcnt_update(cur_seg, -1);
+				} else {
+					DPAA2_SET_FLE_BPID(sge,
 						mempool_to_bpid(cur_seg->pool));
+				}
+			}
 			cur_seg = cur_seg->next;
 		} else {
 			/* Get owner MBUF from indirect buffer */
@@ -1152,7 +1183,8 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			bpid = mempool_to_bpid(mp);
 			if (unlikely((*bufs)->nb_segs > 1)) {
 				if (eth_mbuf_to_sg_fd(*bufs,
-						&fd_arr[loop], bpid))
+						      &fd_arr[loop],
+						      mp, bpid))
 					goto send_n_return;
 			} else {
 				eth_mbuf_to_fd(*bufs,
@@ -1409,6 +1441,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			if (unlikely((*bufs)->nb_segs > 1)) {
 				if (eth_mbuf_to_sg_fd(*bufs,
 						      &fd_arr[loop],
+						      mp,
 						      bpid))
 					goto send_n_return;
 			} else {

From patchwork Wed Jan 20 14:27:19 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 367042
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Wed, 20 Jan 2021 19:57:19 +0530
Message-Id: <20210120142723.14090-5-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 4/7] net/dpaa2: support external buffers in Tx
From: Nipun Gupta

This patch supports Tx of externally allocated buffers.

Signed-off-by: Nipun Gupta
Acked-by: Hemant Agrawal
---
 drivers/net/dpaa2/dpaa2_rxtx.c | 38 ++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

--
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 7d2a7809d3..de297d8274 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1065,6 +1065,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 	struct dpaa2_dev_priv *priv = eth_data->dev_private;
 	uint32_t flags[MAX_TX_RING_SLOTS] = {0};
+	struct rte_mbuf **orig_bufs = bufs;

 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
@@ -1148,6 +1149,25 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				mi = rte_mbuf_from_indirect(*bufs);
 				mp = mi->pool;
 			}
+
+			if (unlikely(RTE_MBUF_HAS_EXTBUF(*bufs))) {
+				rte_mbuf_refcnt_update(*bufs, 1);
+				if (unlikely((*bufs)->nb_segs > 1)) {
+					if (eth_mbuf_to_sg_fd(*bufs,
+							      &fd_arr[loop],
+							      mp, 0))
+						goto send_n_return;
+				} else {
+					eth_mbuf_to_fd(*bufs,
+						       &fd_arr[loop], 0);
+				}
+				bufs++;
+#ifdef RTE_LIBRTE_IEEE1588
+				enable_tx_tstamp(&fd_arr[loop]);
+#endif
+				continue;
+			}
+
 			/* Not a hw_pkt pool allocated frame */
 			if (unlikely(!mp || !priv->bp_list)) {
 				DPAA2_PMD_ERR("Err: No buffer pool attached");
@@ -1220,6 +1240,15 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		nb_pkts -= loop;
 	}
 	dpaa2_q->tx_pkts += num_tx;
+
+	loop = 0;
+	while (loop < num_tx) {
+		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
+			rte_pktmbuf_free(*orig_bufs);
+		orig_bufs++;
+		loop++;
+	}
+
 	return num_tx;

 send_n_return:
@@ -1246,6 +1275,15 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 skip_tx:
 	dpaa2_q->tx_pkts += num_tx;
+
+	loop = 0;
+	while (loop < num_tx) {
+		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
+			rte_pktmbuf_free(*orig_bufs);
+		orig_bufs++;
+		loop++;
+	}
+
 	return num_tx;
 }

From patchwork Wed Jan 20 14:27:20 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 367043
[217.70.189.124]) by mx.google.com with ESMTP id v21si898021edt.309.2021.01.20.06.39.03; Wed, 20 Jan 2021 06:39:03 -0800 (PST) Received-SPF: pass (google.com: domain of dev-bounces@dpdk.org designates 217.70.189.124 as permitted sender) client-ip=217.70.189.124; Authentication-Results: mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 217.70.189.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4233D140DD3; Wed, 20 Jan 2021 15:38:27 +0100 (CET) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by mails.dpdk.org (Postfix) with ESMTP id BF31A140D4D for ; Wed, 20 Jan 2021 15:38:20 +0100 (CET) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id A34E92008B6; Wed, 20 Jan 2021 15:38:20 +0100 (CET) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 40D9D2008D3; Wed, 20 Jan 2021 15:38:19 +0100 (CET) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 946FC40328; Wed, 20 Jan 2021 15:38:17 +0100 (CET) From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Wed, 20 Jan 2021 19:57:20 +0530 Message-Id: <20210120142723.14090-6-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210120142723.14090-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH 5/7] net/dpaa: support external buffers in Tx X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This 
patch support tx of external buffers Signed-off-by: Hemant Agrawal --- drivers/net/dpaa/dpaa_rxtx.c | 29 ++++++++++++++++++++++------- drivers/net/dpaa/dpaa_rxtx.h | 8 +------- 2 files changed, 23 insertions(+), 14 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index e2459d9b99..6efa872c57 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -334,7 +334,7 @@ dpaa_unsegmented_checksum(struct rte_mbuf *mbuf, struct qm_fd *fd_arr) } } -struct rte_mbuf * +static struct rte_mbuf * dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid) { struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid); @@ -791,13 +791,12 @@ uint16_t dpaa_eth_queue_rx(void *q, return num_rx; } -int +static int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, struct qm_fd *fd, - uint32_t bpid) + struct dpaa_bp_info *bp_info) { struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL; - struct dpaa_bp_info *bp_info = DPAA_BPID_TO_POOL_INFO(bpid); struct rte_mbuf *temp, *mi; struct qm_sg_entry *sg_temp, *sgt; int i = 0; @@ -840,7 +839,7 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, fd->format = QM_FD_SG; fd->addr = temp->buf_iova; fd->offset = temp->data_off; - fd->bpid = bpid; + fd->bpid = bp_info ? 
bp_info->bpid : 0xff; fd->length20 = mbuf->pkt_len; while (i < DPAA_SGT_MAX_ENTRIES) { @@ -951,7 +950,7 @@ tx_on_dpaa_pool(struct rte_mbuf *mbuf, tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr); } else if (mbuf->nb_segs > 1 && mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) { - if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info->bpid)) { + if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, bp_info)) { DPAA_PMD_DEBUG("Unable to create Scatter Gather FD"); return 1; } @@ -1055,6 +1054,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) uint16_t state; int ret, realloc_mbuf = 0; uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0}; + struct rte_mbuf **orig_bufs = bufs; if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); @@ -1112,6 +1112,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) mp = mi->pool; } + if (unlikely(RTE_MBUF_HAS_EXTBUF(mbuf))) { + rte_mbuf_refcnt_update(mbuf, 1); + bp_info = NULL; + goto indirect_buf; + } + bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp); if (unlikely(mp->ops_index != bp_info->dpaa_ops_index || realloc_mbuf == 1)) { @@ -1130,7 +1136,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) mbuf = temp_mbuf; realloc_mbuf = 0; } - +indirect_buf: state = tx_on_dpaa_pool(mbuf, bp_info, &fd_arr[loop]); if (unlikely(state)) { @@ -1157,6 +1163,15 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q); + + loop = 0; + while (loop < sent) { + if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs))) + rte_pktmbuf_free(*orig_bufs); + orig_bufs++; + loop++; + } + return sent; } diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h index d9d7e04f5c..99e09196e9 100644 --- a/drivers/net/dpaa/dpaa_rxtx.h +++ b/drivers/net/dpaa/dpaa_rxtx.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright 2016 Freescale Semiconductor, Inc. All rights reserved. 
- * Copyright 2017,2020 NXP + * Copyright 2017,2020-2021 NXP * */ @@ -279,12 +279,6 @@ uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused, struct rte_mbuf **bufs __rte_unused, uint16_t nb_bufs __rte_unused); -struct rte_mbuf *dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid); - -int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, - struct qm_fd *fd, - uint32_t bpid); - uint16_t dpaa_free_mbuf(const struct qm_fd *fd); void dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs); From patchwork Wed Jan 20 14:27:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 367045
From: Hemant Agrawal To: dev@dpdk.org, ferruh.yigit@intel.com Date: Wed, 20 Jan 2021 19:57:21 +0530 Message-Id:
<20210120142723.14090-7-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210120142723.14090-1-hemant.agrawal@nxp.com> References: <20210120142723.14090-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH 6/7] net/dpaa2: add traffic management driver From: Gagandeep Singh Add basic support for scheduling and shaping on the DPAA2 platform. The hardware supports two levels of scheduling and shaping; however, the current patch supports only a single level. Signed-off-by: Gagandeep Singh Acked-by: Hemant Agrawal --- drivers/net/dpaa2/dpaa2_ethdev.c | 14 +- drivers/net/dpaa2/dpaa2_ethdev.h | 5 + drivers/net/dpaa2/dpaa2_tm.c | 626 ++++++++++++++++++++++++++++ drivers/net/dpaa2/dpaa2_tm.h | 32 ++ drivers/net/dpaa2/mc/dpni.c | 313 +++++++++++++- drivers/net/dpaa2/mc/fsl_dpni.h | 210 +++++++++- drivers/net/dpaa2/mc/fsl_dpni_cmd.h | 59 ++- drivers/net/dpaa2/meson.build | 3 +- 8 files changed, 1257 insertions(+), 5 deletions(-) create mode 100644 drivers/net/dpaa2/dpaa2_tm.c create mode 100644 drivers/net/dpaa2/dpaa2_tm.h -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 5c1a12b841..cce6aa0964 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1,7 +1,7 @@ /* * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2020 NXP + * Copyright 2016-2021 NXP * */ @@ -638,6 +638,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev) if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK); + dpaa2_tm_init(dev); + return 0; } @@ -1264,6 +1266,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev) return -1; } + dpaa2_tm_deinit(dev); dpaa2_flow_clean(dev); /* Clean the device first */ ret = dpni_reset(dpni, CMD_PRI_LOW, priv->token); @@ -2345,6 +2348,14 @@ dpaa2_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->conf.tx_deferred_start = 0; } +static int +dpaa2_tm_ops_get(struct rte_eth_dev *dev __rte_unused, void *ops) +{ + *(const void **)ops = &dpaa2_tm_ops; + + return 0; +} + static struct eth_dev_ops dpaa2_ethdev_ops = { .dev_configure = dpaa2_eth_dev_configure, .dev_start = dpaa2_dev_start, @@ -2387,6 +2398,7 @@ static struct eth_dev_ops dpaa2_ethdev_ops = { .filter_ctrl = dpaa2_dev_flow_ctrl, .rxq_info_get = dpaa2_rxq_info_get, .txq_info_get = dpaa2_txq_info_get, + .tm_ops_get = dpaa2_tm_ops_get, #if defined(RTE_LIBRTE_IEEE1588) .timesync_enable = dpaa2_timesync_enable, .timesync_disable = dpaa2_timesync_disable, diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index bb49fa9a38..9837eb62c8 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -12,6 +12,7 @@ #include #include +#include "dpaa2_tm.h" #include #include @@ -112,6 +113,8 @@ extern int dpaa2_timestamp_dynfield_offset; extern const struct rte_flow_ops dpaa2_flow_ops; extern enum rte_filter_type dpaa2_filter_type; +extern const struct rte_tm_ops dpaa2_tm_ops; + #define IP_ADDRESS_OFFSET_INVALID (-1) struct dpaa2_key_info { @@ -179,6 +182,8 @@ struct dpaa2_dev_priv { struct rte_eth_dev *eth_dev; /**< Pointer back to holding ethdev */ LIST_HEAD(, rte_flow) flows; /**< Configured flow rule handles. 
*/ + LIST_HEAD(nodes, dpaa2_tm_node) nodes; + LIST_HEAD(shaper_profiles, dpaa2_tm_shaper_profile) shaper_profiles; }; int dpaa2_distset_to_dpkg_profile_cfg(uint64_t req_dist_set, diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c new file mode 100644 index 0000000000..841da733d5 --- /dev/null +++ b/drivers/net/dpaa2/dpaa2_tm.c @@ -0,0 +1,626 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020 NXP + */ + +#include +#include +#include + +#include "dpaa2_ethdev.h" + +#define DPAA2_BURST_MAX (64 * 1024) + +#define DPAA2_SHAPER_MIN_RATE 0 +#define DPAA2_SHAPER_MAX_RATE 107374182400ull +#define DPAA2_WEIGHT_MAX 24701 + +int +dpaa2_tm_init(struct rte_eth_dev *dev) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + + LIST_INIT(&priv->shaper_profiles); + LIST_INIT(&priv->nodes); + + return 0; +} + +void dpaa2_tm_deinit(struct rte_eth_dev *dev) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile = + LIST_FIRST(&priv->shaper_profiles); + struct dpaa2_tm_node *node = LIST_FIRST(&priv->nodes); + + while (profile) { + struct dpaa2_tm_shaper_profile *next = LIST_NEXT(profile, next); + + LIST_REMOVE(profile, next); + rte_free(profile); + profile = next; + } + + while (node) { + struct dpaa2_tm_node *next = LIST_NEXT(node, next); + + LIST_REMOVE(node, next); + rte_free(node); + node = next; + } +} + +static struct dpaa2_tm_node * +dpaa2_node_from_id(struct dpaa2_dev_priv *priv, uint32_t node_id) +{ + struct dpaa2_tm_node *node; + + LIST_FOREACH(node, &priv->nodes, next) + if (node->id == node_id) + return node; + + return NULL; +} + +static int +dpaa2_capabilities_get(struct rte_eth_dev *dev, + struct rte_tm_capabilities *cap, + struct rte_tm_error *error) +{ + if (!cap) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Capabilities are NULL\n"); + + memset(cap, 0, sizeof(*cap)); + + /* root node(port) + txqs number, assuming each TX + * Queue is mapped to 
each TC + */ + cap->n_nodes_max = 1 + dev->data->nb_tx_queues; + cap->n_levels_max = 2; /* port level + txqs level */ + cap->non_leaf_nodes_identical = 1; + cap->leaf_nodes_identical = 1; + + cap->shaper_n_max = 1; + cap->shaper_private_n_max = 1; + cap->shaper_private_dual_rate_n_max = 1; + cap->shaper_private_rate_min = DPAA2_SHAPER_MIN_RATE; + cap->shaper_private_rate_max = DPAA2_SHAPER_MAX_RATE; + + cap->sched_n_children_max = dev->data->nb_tx_queues; + cap->sched_sp_n_priorities_max = dev->data->nb_tx_queues; + cap->sched_wfq_n_children_per_group_max = dev->data->nb_tx_queues; + cap->sched_wfq_n_groups_max = 2; + cap->sched_wfq_weight_max = DPAA2_WEIGHT_MAX; + + cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_STATS; + cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES; + + return 0; +} + +static int +dpaa2_level_capabilities_get(struct rte_eth_dev *dev, + uint32_t level_id, + struct rte_tm_level_capabilities *cap, + struct rte_tm_error *error) +{ + if (!cap) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + memset(cap, 0, sizeof(*cap)); + + if (level_id > 1) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, "Wrong level id\n"); + + if (level_id == 0) { /* Root node */ + cap->n_nodes_max = 1; + cap->n_nodes_nonleaf_max = 1; + cap->non_leaf_nodes_identical = 1; + + cap->nonleaf.shaper_private_supported = 1; + cap->nonleaf.shaper_private_dual_rate_supported = 1; + cap->nonleaf.shaper_private_rate_min = DPAA2_SHAPER_MIN_RATE; + cap->nonleaf.shaper_private_rate_max = DPAA2_SHAPER_MAX_RATE; + + cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = + dev->data->nb_tx_queues; + cap->nonleaf.sched_wfq_n_groups_max = 2; + cap->nonleaf.sched_wfq_weight_max = DPAA2_WEIGHT_MAX; + cap->nonleaf.stats_mask = RTE_TM_STATS_N_PKTS | + RTE_TM_STATS_N_BYTES; + } else { /* leaf nodes */ + 
cap->n_nodes_max = dev->data->nb_tx_queues; + cap->n_nodes_leaf_max = dev->data->nb_tx_queues; + cap->leaf_nodes_identical = 1; + + cap->leaf.stats_mask = RTE_TM_STATS_N_PKTS; + } + + return 0; +} + +static int +dpaa2_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id, + struct rte_tm_node_capabilities *cap, + struct rte_tm_error *error) +{ + struct dpaa2_tm_node *node; + struct dpaa2_dev_priv *priv = dev->data->dev_private; + + if (!cap) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + memset(cap, 0, sizeof(*cap)); + + node = dpaa2_node_from_id(priv, node_id); + if (!node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id does not exist\n"); + + if (node->type == 0) { + cap->shaper_private_supported = 1; + + cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = + dev->data->nb_tx_queues; + cap->nonleaf.sched_wfq_n_groups_max = 2; + cap->nonleaf.sched_wfq_weight_max = DPAA2_WEIGHT_MAX; + cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES; + } else { + cap->stats_mask = RTE_TM_STATS_N_PKTS; + } + + return 0; +} + +static int +dpaa2_node_type_get(struct rte_eth_dev *dev, uint32_t node_id, int *is_leaf, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_node *node; + + if (!is_leaf) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + node = dpaa2_node_from_id(priv, node_id); + if (!node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id does not exist\n"); + + *is_leaf = node->type == 1/*NODE_QUEUE*/ ? 
1 : 0; + + return 0; +} + +static struct dpaa2_tm_shaper_profile * +dpaa2_shaper_profile_from_id(struct dpaa2_dev_priv *priv, + uint32_t shaper_profile_id) +{ + struct dpaa2_tm_shaper_profile *profile; + + LIST_FOREACH(profile, &priv->shaper_profiles, next) + if (profile->id == shaper_profile_id) + return profile; + + return NULL; +} + +static int +dpaa2_shaper_profile_add(struct rte_eth_dev *dev, uint32_t shaper_profile_id, + struct rte_tm_shaper_params *params, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile; + + if (!params) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + if (params->committed.rate > DPAA2_SHAPER_MAX_RATE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE, + NULL, "Committed rate is out of range\n"); + + if (params->committed.size > DPAA2_BURST_MAX) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE, + NULL, "Committed size is out of range\n"); + + if (params->peak.rate > DPAA2_SHAPER_MAX_RATE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE, + NULL, "Peak rate is out of range\n"); + + if (params->peak.size > DPAA2_BURST_MAX) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE, + NULL, "Peak size is out of range\n"); + + if (shaper_profile_id == RTE_TM_SHAPER_PROFILE_ID_NONE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Wrong shaper profile id\n"); + + profile = dpaa2_shaper_profile_from_id(priv, shaper_profile_id); + if (profile) + return -rte_tm_error_set(error, EEXIST, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Profile id already exists\n"); + + profile = rte_zmalloc_socket(NULL, sizeof(*profile), 0, + rte_socket_id()); + if (!profile) + return -rte_tm_error_set(error, ENOMEM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); +
+ profile->id = shaper_profile_id; + rte_memcpy(&profile->params, params, sizeof(profile->params)); + + LIST_INSERT_HEAD(&priv->shaper_profiles, profile, next); + + return 0; +} + +static int +dpaa2_shaper_profile_delete(struct rte_eth_dev *dev, uint32_t shaper_profile_id, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile; + + profile = dpaa2_shaper_profile_from_id(priv, shaper_profile_id); + if (!profile) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Profile id does not exist\n"); + + if (profile->refcnt) + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Profile is used\n"); + + LIST_REMOVE(profile, next); + rte_free(profile); + + return 0; +} + +static int +dpaa2_node_check_params(struct rte_eth_dev *dev, uint32_t node_id, + __rte_unused uint32_t priority, uint32_t weight, + uint32_t level_id, + struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + if (node_id == RTE_TM_NODE_ID_NULL) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id is invalid\n"); + + if (weight > DPAA2_WEIGHT_MAX) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_WEIGHT, + NULL, "Weight is out of range\n"); + + if (level_id != 0 && level_id != 1) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_LEVEL_ID, + NULL, "Wrong level id\n"); + + if (!params) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + if (params->shared_shaper_id) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID, + NULL, "Shared shaper is not supported\n"); + + if (params->n_shared_shapers) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS, + NULL, "Shared shaper is not supported\n"); + + /* verify port (root node) settings */ + if (node_id >= dev->data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE, + NULL, "WFQ weight mode is not supported\n"); + + if (params->stats_mask & ~(RTE_TM_STATS_N_PKTS | + RTE_TM_STATS_N_BYTES)) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS, + NULL, + "Requested port stats are not supported\n"); + + return 0; + } + if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID, + NULL, "Private shaper not supported on leaf\n"); + + if (params->stats_mask & ~RTE_TM_STATS_N_PKTS) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS, + NULL, + "Requested stats are not supported\n"); + + /* check leaf node */ + if (level_id == 1) { + if (params->leaf.cman != RTE_TM_CMAN_TAIL_DROP) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN, + NULL, "Only taildrop is supported\n"); + } + + return 0; +} + +static int +dpaa2_node_add(struct rte_eth_dev *dev, uint32_t node_id, + uint32_t parent_node_id, uint32_t priority, uint32_t weight, + uint32_t level_id, struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_shaper_profile *profile = NULL; + struct dpaa2_tm_node *node, *parent = NULL; + int ret; + + if (0/* If device is started*/) + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Port is already started\n"); + + ret = dpaa2_node_check_params(dev, node_id, priority, weight, level_id, + params, error); + if (ret) + return ret; + + if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) { + profile = dpaa2_shaper_profile_from_id(priv, + params->shaper_profile_id); + if (!profile) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID, + NULL, "Shaper id does not exist\n"); + } + if 
(parent_node_id == RTE_TM_NODE_ID_NULL) { + LIST_FOREACH(node, &priv->nodes, next) { + if (node->type != 0 /*root node*/) + continue; + + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Root node exists\n"); + } + } else { + parent = dpaa2_node_from_id(priv, parent_node_id); + if (!parent) + return -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID, + NULL, "Parent node id does not exist\n"); + } + + node = dpaa2_node_from_id(priv, node_id); + if (node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id already exists\n"); + + node = rte_zmalloc_socket(NULL, sizeof(*node), 0, rte_socket_id()); + if (!node) + return -rte_tm_error_set(error, ENOMEM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + + node->id = node_id; + node->type = parent_node_id == RTE_TM_NODE_ID_NULL ? 0/*NODE_PORT*/ : + 1/*NODE_QUEUE*/; + + if (parent) { + node->parent = parent; + parent->refcnt++; + } + + if (profile) { + node->profile = profile; + profile->refcnt++; + } + + node->weight = weight; + node->priority = priority; + node->stats_mask = params->stats_mask; + + LIST_INSERT_HEAD(&priv->nodes, node, next); + + return 0; +} + +static int +dpaa2_node_delete(struct rte_eth_dev *dev, uint32_t node_id, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_node *node; + + if (0) { + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, "Port is already started\n"); + } + + node = dpaa2_node_from_id(priv, node_id); + if (!node) + return -rte_tm_error_set(error, ENODEV, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id does not exist\n"); + + if (node->refcnt) + return -rte_tm_error_set(error, EPERM, + RTE_TM_ERROR_TYPE_NODE_ID, + NULL, "Node id is used\n"); + + if (node->parent) + node->parent->refcnt--; + + if (node->profile) + node->profile->refcnt--; + + LIST_REMOVE(node, next); + rte_free(node); + + return 0; +} + +static int
+dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail, + struct rte_tm_error *error) +{ + struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_tm_node *node, *temp_node; + struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private; + int ret; + int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC]; + struct dpni_tx_priorities_cfg prio_cfg; + + memset(&prio_cfg, 0, sizeof(prio_cfg)); + memset(conf, 0, sizeof(conf)); + + LIST_FOREACH(node, &priv->nodes, next) { + if (node->type == 0/*root node*/) { + if (!node->profile) + continue; + + struct dpni_tx_shaping_cfg tx_cr_shaper, tx_er_shaper; + + tx_cr_shaper.max_burst_size = + node->profile->params.committed.size; + tx_cr_shaper.rate_limit = + node->profile->params.committed.rate / (1024 * 1024); + tx_er_shaper.max_burst_size = + node->profile->params.peak.size; + tx_er_shaper.rate_limit = + node->profile->params.peak.rate / (1024 * 1024); + ret = dpni_set_tx_shaping(dpni, 0, priv->token, + &tx_cr_shaper, &tx_er_shaper, 0); + if (ret) { + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_SHAPER_PROFILE, NULL, + "Error in setting Shaping\n"); + goto out; + } + + continue; + } else { /* level 1, all leaf nodes */ + if (node->id >= dev->data->nb_tx_queues) { + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_NODE_ID, NULL, + "Not enough txqs configured\n"); + goto out; + } + + if (conf[node->id]) + continue; + + LIST_FOREACH(temp_node, &priv->nodes, next) { + if (temp_node->id == node->id || + temp_node->type == 0) + continue; + if (conf[temp_node->id]) + continue; + if (node->priority == temp_node->priority) { + if (wfq_grp == 0) { + prio_cfg.tc_sched[temp_node->id].mode = + DPNI_TX_SCHED_WEIGHTED_A; + /* DPDK support lowest weight 1 and DPAA2 platform 100 */ + prio_cfg.tc_sched[temp_node->id].delta_bandwidth = + temp_node->weight + 99; + } else if (wfq_grp == 1) { + prio_cfg.tc_sched[temp_node->id].mode = + DPNI_TX_SCHED_WEIGHTED_B; + 
prio_cfg.tc_sched[temp_node->id].delta_bandwidth = + temp_node->weight + 99; + } else { + /*TODO: add one more check for number of nodes in a group */ + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, NULL, + "Only 2 WFQ Groups are supported\n"); + goto out; + } + conf[temp_node->id] = 1; + is_wfq_grp = 1; + } + } + if (is_wfq_grp) { + if (wfq_grp == 0) { + prio_cfg.tc_sched[node->id].mode = + DPNI_TX_SCHED_WEIGHTED_A; + prio_cfg.tc_sched[node->id].delta_bandwidth = + node->weight + 99; + prio_cfg.prio_group_A = node->priority; + } else if (wfq_grp == 1) { + prio_cfg.tc_sched[node->id].mode = + DPNI_TX_SCHED_WEIGHTED_B; + prio_cfg.tc_sched[node->id].delta_bandwidth = + node->weight + 99; + prio_cfg.prio_group_B = node->priority; + } + wfq_grp++; + is_wfq_grp = 0; + } + conf[node->id] = 1; + } + if (wfq_grp) + prio_cfg.separate_groups = 1; + } + ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg); + if (ret) { + ret = -rte_tm_error_set(error, EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, NULL, + "Scheduling Failed\n"); + goto out; + } + + return 0; + +out: + if (clear_on_fail) { + dpaa2_tm_deinit(dev); + dpaa2_tm_init(dev); + } + + return ret; +} + +const struct rte_tm_ops dpaa2_tm_ops = { + .node_type_get = dpaa2_node_type_get, + .capabilities_get = dpaa2_capabilities_get, + .level_capabilities_get = dpaa2_level_capabilities_get, + .node_capabilities_get = dpaa2_node_capabilities_get, + .shaper_profile_add = dpaa2_shaper_profile_add, + .shaper_profile_delete = dpaa2_shaper_profile_delete, + .node_add = dpaa2_node_add, + .node_delete = dpaa2_node_delete, + .hierarchy_commit = dpaa2_hierarchy_commit, +}; diff --git a/drivers/net/dpaa2/dpaa2_tm.h b/drivers/net/dpaa2/dpaa2_tm.h new file mode 100644 index 0000000000..6632fab687 --- /dev/null +++ b/drivers/net/dpaa2/dpaa2_tm.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2020 NXP + */ + +#ifndef _DPAA2_TM_H_ +#define _DPAA2_TM_H_ + +#include + +struct 
dpaa2_tm_shaper_profile { + LIST_ENTRY(dpaa2_tm_shaper_profile) next; + uint32_t id; + int refcnt; + struct rte_tm_shaper_params params; +}; + +struct dpaa2_tm_node { + LIST_ENTRY(dpaa2_tm_node) next; + uint32_t id; + uint32_t type; + int refcnt; + struct dpaa2_tm_node *parent; + struct dpaa2_tm_shaper_profile *profile; + uint32_t weight; + uint32_t priority; + uint64_t stats_mask; +}; + +int dpaa2_tm_init(struct rte_eth_dev *dev); +void dpaa2_tm_deinit(struct rte_eth_dev *dev); + +#endif /* _DPAA2_TM_H_ */ diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c index 683d7bcc17..b254931386 100644 --- a/drivers/net/dpaa2/mc/dpni.c +++ b/drivers/net/dpaa2/mc/dpni.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2013-2016 Freescale Semiconductor Inc. - * Copyright 2016-2019 NXP + * Copyright 2016-2020 NXP * */ #include @@ -949,6 +949,46 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io, return 0; } +/** + * dpni_set_tx_shaping() - Set the transmit shaping + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @tx_cr_shaper: TX committed rate shaping configuration + * @tx_er_shaper: TX excess rate shaping configuration + * @coupled: Committed and excess rate shapers are coupled + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpni_set_tx_shaping(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + const struct dpni_tx_shaping_cfg *tx_cr_shaper, + const struct dpni_tx_shaping_cfg *tx_er_shaper, + int coupled) +{ + struct dpni_cmd_set_tx_shaping *cmd_params; + struct mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SHAPING, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_tx_shaping *)cmd.params; + cmd_params->tx_cr_max_burst_size = + cpu_to_le16(tx_cr_shaper->max_burst_size); + cmd_params->tx_er_max_burst_size = + cpu_to_le16(tx_er_shaper->max_burst_size); + cmd_params->tx_cr_rate_limit = + cpu_to_le32(tx_cr_shaper->rate_limit); + cmd_params->tx_er_rate_limit = + cpu_to_le32(tx_er_shaper->rate_limit); + dpni_set_field(cmd_params->coupled, COUPLED, coupled); + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + /** * dpni_set_max_frame_length() - Set the maximum received frame length. * @mc_io: Pointer to MC portal's I/O object @@ -1476,6 +1516,55 @@ int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io, return mc_send_command(mc_io, &cmd); } +/** + * dpni_set_tx_priorities() - Set transmission TC priority configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @cfg: Transmission selection configuration + * + * warning: Allowed only when DPNI is disabled + * + * Return: '0' on Success; Error code otherwise. 
+ */ +int dpni_set_tx_priorities(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + const struct dpni_tx_priorities_cfg *cfg) +{ + struct dpni_cmd_set_tx_priorities *cmd_params; + struct mc_command cmd = { 0 }; + int i; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_PRIORITIES, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_tx_priorities *)cmd.params; + dpni_set_field(cmd_params->flags, + SEPARATE_GRP, + cfg->separate_groups); + cmd_params->prio_group_A = cfg->prio_group_A; + cmd_params->prio_group_B = cfg->prio_group_B; + + for (i = 0; i + 1 < DPNI_MAX_TC; i = i + 2) { + dpni_set_field(cmd_params->modes[i / 2], + MODE_1, + cfg->tc_sched[i].mode); + dpni_set_field(cmd_params->modes[i / 2], + MODE_2, + cfg->tc_sched[i + 1].mode); + } + + for (i = 0; i < DPNI_MAX_TC; i++) { + cmd_params->delta_bandwidth[i] = + cpu_to_le16(cfg->tc_sched[i].delta_bandwidth); + } + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + /** * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration * @mc_io: Pointer to MC portal's I/O object @@ -1808,6 +1897,228 @@ int dpni_clear_fs_entries(struct fsl_mc_io *mc_io, return mc_send_command(mc_io, &cmd); } +/** + * dpni_set_rx_tc_policing() - Set Rx traffic class policing configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @tc_id: Traffic class selection (0-7) + * @cfg: Traffic class policing configuration + * + * Return: '0' on Success; error code otherwise. 
+ */ +int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t tc_id, + const struct dpni_rx_tc_policing_cfg *cfg) +{ + struct dpni_cmd_set_rx_tc_policing *cmd_params; + struct mc_command cmd = { 0 }; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_POLICING, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_set_rx_tc_policing *)cmd.params; + dpni_set_field(cmd_params->mode_color, COLOR, cfg->default_color); + dpni_set_field(cmd_params->mode_color, MODE, cfg->mode); + dpni_set_field(cmd_params->units, UNITS, cfg->units); + cmd_params->options = cpu_to_le32(cfg->options); + cmd_params->cir = cpu_to_le32(cfg->cir); + cmd_params->cbs = cpu_to_le32(cfg->cbs); + cmd_params->eir = cpu_to_le32(cfg->eir); + cmd_params->ebs = cpu_to_le32(cfg->ebs); + cmd_params->tc_id = tc_id; + + /* send command to mc*/ + return mc_send_command(mc_io, &cmd); +} + +/** + * dpni_get_rx_tc_policing() - Get Rx traffic class policing configuration + * @mc_io: Pointer to MC portal's I/O object + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' + * @token: Token of DPNI object + * @tc_id: Traffic class selection (0-7) + * @cfg: Traffic class policing configuration + * + * Return: '0' on Success; error code otherwise. 
+ */ +int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io, + uint32_t cmd_flags, + uint16_t token, + uint8_t tc_id, + struct dpni_rx_tc_policing_cfg *cfg) +{ + struct dpni_rsp_get_rx_tc_policing *rsp_params; + struct dpni_cmd_get_rx_tc_policing *cmd_params; + struct mc_command cmd = { 0 }; + int err; + + /* prepare command */ + cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_POLICING, + cmd_flags, + token); + cmd_params = (struct dpni_cmd_get_rx_tc_policing *)cmd.params; + cmd_params->tc_id = tc_id; + + + /* send command to mc*/ + err = mc_send_command(mc_io, &cmd); + if (err) + return err; + + rsp_params = (struct dpni_rsp_get_rx_tc_policing *)cmd.params; + cfg->options = le32_to_cpu(rsp_params->options); + cfg->cir = le32_to_cpu(rsp_params->cir); + cfg->cbs = le32_to_cpu(rsp_params->cbs); + cfg->eir = le32_to_cpu(rsp_params->eir); + cfg->ebs = le32_to_cpu(rsp_params->ebs); + cfg->units = dpni_get_field(rsp_params->units, UNITS); + cfg->mode = dpni_get_field(rsp_params->mode_color, MODE); + cfg->default_color = dpni_get_field(rsp_params->mode_color, COLOR); + + return 0; +} + +/** + * dpni_prepare_early_drop() - prepare an early drop. 
+ * @cfg:		Early-drop configuration
+ * @early_drop_buf:	Zeroed 256 bytes of memory before mapping it to DMA
+ *
+ * This function has to be called before dpni_set_rx_tc_early_drop or
+ * dpni_set_tx_tc_early_drop
+ */
+void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg,
+			     uint8_t *early_drop_buf)
+{
+	struct dpni_early_drop *ext_params;
+
+	ext_params = (struct dpni_early_drop *)early_drop_buf;
+
+	dpni_set_field(ext_params->flags, DROP_ENABLE, cfg->enable);
+	dpni_set_field(ext_params->flags, DROP_UNITS, cfg->units);
+	ext_params->green_drop_probability = cfg->green.drop_probability;
+	ext_params->green_max_threshold = cpu_to_le64(cfg->green.max_threshold);
+	ext_params->green_min_threshold = cpu_to_le64(cfg->green.min_threshold);
+	ext_params->yellow_drop_probability = cfg->yellow.drop_probability;
+	ext_params->yellow_max_threshold =
+		cpu_to_le64(cfg->yellow.max_threshold);
+	ext_params->yellow_min_threshold =
+		cpu_to_le64(cfg->yellow.min_threshold);
+	ext_params->red_drop_probability = cfg->red.drop_probability;
+	ext_params->red_max_threshold = cpu_to_le64(cfg->red.max_threshold);
+	ext_params->red_min_threshold = cpu_to_le64(cfg->red.min_threshold);
+}
+
+/**
+ * dpni_extract_early_drop() - extract the early drop configuration.
+ * @cfg:		Early-drop configuration
+ * @early_drop_buf:	Zeroed 256 bytes of memory before mapping it to DMA
+ *
+ * This function has to be called after dpni_get_rx_tc_early_drop or
+ * dpni_get_tx_tc_early_drop
+ */
+void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg,
+			     const uint8_t *early_drop_buf)
+{
+	const struct dpni_early_drop *ext_params;
+
+	ext_params = (const struct dpni_early_drop *)early_drop_buf;
+
+	cfg->enable = dpni_get_field(ext_params->flags, DROP_ENABLE);
+	cfg->units = dpni_get_field(ext_params->flags, DROP_UNITS);
+	cfg->green.drop_probability = ext_params->green_drop_probability;
+	cfg->green.max_threshold = le64_to_cpu(ext_params->green_max_threshold);
+	cfg->green.min_threshold = le64_to_cpu(ext_params->green_min_threshold);
+	cfg->yellow.drop_probability = ext_params->yellow_drop_probability;
+	cfg->yellow.max_threshold =
+		le64_to_cpu(ext_params->yellow_max_threshold);
+	cfg->yellow.min_threshold =
+		le64_to_cpu(ext_params->yellow_min_threshold);
+	cfg->red.drop_probability = ext_params->red_drop_probability;
+	cfg->red.max_threshold = le64_to_cpu(ext_params->red_max_threshold);
+	cfg->red.min_threshold = le64_to_cpu(ext_params->red_min_threshold);
+}
+
+/**
+ * dpni_set_early_drop() - Set traffic class early-drop configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue - only Rx and Tx types are supported
+ * @tc_id:	Traffic class selection (0-7)
+ * @early_drop_iova:	I/O virtual address of 256 bytes DMA-able memory filled
+ *	with the early-drop configuration by calling dpni_prepare_early_drop()
+ *
+ * warning: Before calling this function, call dpni_prepare_early_drop() to
+ * prepare the early_drop_iova parameter
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_set_early_drop(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			enum dpni_queue_type qtype,
+			uint8_t tc_id,
+			uint64_t early_drop_iova)
+{
+	struct dpni_cmd_early_drop *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_EARLY_DROP,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_early_drop *)cmd.params;
+	cmd_params->qtype = qtype;
+	cmd_params->tc = tc_id;
+	cmd_params->early_drop_iova = cpu_to_le64(early_drop_iova);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_early_drop() - Get Rx traffic class early-drop configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue - only Rx and Tx types are supported
+ * @tc_id:	Traffic class selection (0-7)
+ * @early_drop_iova:	I/O virtual address of 256 bytes DMA-able memory
+ *
+ * warning: After calling this function, call dpni_extract_early_drop() to
+ * get the early drop configuration
+ *
+ * Return:	'0' on Success; error code otherwise.
+ */
+int dpni_get_early_drop(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			enum dpni_queue_type qtype,
+			uint8_t tc_id,
+			uint64_t early_drop_iova)
+{
+	struct dpni_cmd_early_drop *cmd_params;
+	struct mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_EARLY_DROP,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_early_drop *)cmd.params;
+	cmd_params->qtype = qtype;
+	cmd_params->tc = tc_id;
+	cmd_params->early_drop_iova = cpu_to_le64(early_drop_iova);
+
+	/* send command to MC */
+	return mc_send_command(mc_io, &cmd);
+}
+
 /**
  * dpni_set_congestion_notification() - Set traffic class congestion
  * notification configuration
diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h
index 598911ddd1..df42746c9a 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
  *
  */
 #ifndef __FSL_DPNI_H
@@ -731,6 +731,23 @@ int dpni_get_link_state(struct fsl_mc_io *mc_io,
 			uint16_t token,
 			struct dpni_link_state *state);
 
+/**
+ * struct dpni_tx_shaping_cfg - Structure representing DPNI Tx shaping
+ *	configuration
+ * @rate_limit:		Rate in Mbps
+ * @max_burst_size:	Burst size in bytes (up to 64KB)
+ */
+struct dpni_tx_shaping_cfg {
+	uint32_t rate_limit;
+	uint16_t max_burst_size;
+};
+
+int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			const struct dpni_tx_shaping_cfg *tx_cr_shaper,
+			const struct dpni_tx_shaping_cfg *tx_er_shaper,
+			int coupled);
+
 int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
 			      uint32_t cmd_flags,
 			      uint16_t token,
@@ -832,6 +849,49 @@ int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
 			    uint32_t cmd_flags,
 			    uint16_t token);
 
+/**
+ * enum dpni_tx_schedule_mode - DPNI Tx scheduling mode
+ * @DPNI_TX_SCHED_STRICT_PRIORITY: strict priority
+ * @DPNI_TX_SCHED_WEIGHTED_A: weight-based scheduling in group A
+ * @DPNI_TX_SCHED_WEIGHTED_B: weight-based scheduling in group B
+ */
+enum dpni_tx_schedule_mode {
+	DPNI_TX_SCHED_STRICT_PRIORITY = 0,
+	DPNI_TX_SCHED_WEIGHTED_A,
+	DPNI_TX_SCHED_WEIGHTED_B,
+};
+
+/**
+ * struct dpni_tx_schedule_cfg - Structure representing Tx scheduling conf
+ * @mode:		Scheduling mode
+ * @delta_bandwidth:	Bandwidth represented in weights from 100 to 10000;
+ *	not applicable for 'strict-priority' mode;
+ */
+struct dpni_tx_schedule_cfg {
+	enum dpni_tx_schedule_mode mode;
+	uint16_t delta_bandwidth;
+};
+
+/**
+ * struct dpni_tx_priorities_cfg - Structure representing transmission
+ *	priorities for DPNI TCs
+ * @tc_sched:	An array of traffic-classes
+ * @prio_group_A: Priority of group A
+ * @prio_group_B: Priority of group B
+ * @separate_groups: Treat A and B groups as separate
+ */
+struct dpni_tx_priorities_cfg {
+	struct dpni_tx_schedule_cfg tc_sched[DPNI_MAX_TC];
+	uint32_t prio_group_A;
+	uint32_t prio_group_B;
+	uint8_t separate_groups;
+};
+
+int dpni_set_tx_priorities(struct fsl_mc_io *mc_io,
+			   uint32_t cmd_flags,
+			   uint16_t token,
+			   const struct dpni_tx_priorities_cfg *cfg);
+
 /**
  * enum dpni_dist_mode - DPNI distribution mode
  * @DPNI_DIST_MODE_NONE: No distribution
@@ -904,6 +964,90 @@ int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
 			uint8_t tc_id,
 			const struct dpni_rx_tc_dist_cfg *cfg);
 
+/**
+ * Set to select color aware mode (otherwise - color blind)
+ */
+#define DPNI_POLICER_OPT_COLOR_AWARE	0x00000001
+/**
+ * Set to discard frame with RED color
+ */
+#define DPNI_POLICER_OPT_DISCARD_RED	0x00000002
+
+/**
+ * enum dpni_policer_mode - selecting the policer mode
+ * @DPNI_POLICER_MODE_NONE: Policer is disabled
+ * @DPNI_POLICER_MODE_PASS_THROUGH: Policer pass through
+ * @DPNI_POLICER_MODE_RFC_2698: Policer algorithm RFC 2698
+ * @DPNI_POLICER_MODE_RFC_4115: Policer algorithm RFC 4115
+ */
+enum dpni_policer_mode {
+	DPNI_POLICER_MODE_NONE = 0,
+	DPNI_POLICER_MODE_PASS_THROUGH,
+	DPNI_POLICER_MODE_RFC_2698,
+	DPNI_POLICER_MODE_RFC_4115
+};
+
+/**
+ * enum dpni_policer_unit - DPNI policer units
+ * @DPNI_POLICER_UNIT_BYTES: bytes units
+ * @DPNI_POLICER_UNIT_FRAMES: frames units
+ */
+enum dpni_policer_unit {
+	DPNI_POLICER_UNIT_BYTES = 0,
+	DPNI_POLICER_UNIT_FRAMES
+};
+
+/**
+ * enum dpni_policer_color - selecting the policer color
+ * @DPNI_POLICER_COLOR_GREEN: Green color
+ * @DPNI_POLICER_COLOR_YELLOW: Yellow color
+ * @DPNI_POLICER_COLOR_RED: Red color
+ */
+enum dpni_policer_color {
+	DPNI_POLICER_COLOR_GREEN = 0,
+	DPNI_POLICER_COLOR_YELLOW,
+	DPNI_POLICER_COLOR_RED
+};
+
+/**
+ * struct dpni_rx_tc_policing_cfg - Policer configuration
+ * @options:	Mask of available options; use 'DPNI_POLICER_OPT_' values
+ * @mode:	policer mode
+ * @default_color:	For pass-through mode the policer re-colors with this
+ *	color any incoming packets. For color-aware non-pass-through mode:
+ *	policer re-colors with this color all packets with FD[DROPP]>2.
+ * @units:	Bytes or Packets
+ * @cir:	Committed information rate (CIR) in Kbps or packets/second
+ * @cbs:	Committed burst size (CBS) in bytes or packets
+ * @eir:	Peak information rate (PIR, rfc2698) in Kbps or packets/second
+ *		Excess information rate (EIR, rfc4115) in Kbps or packets/second
+ * @ebs:	Peak burst size (PBS, rfc2698) in bytes or packets
+ *		Excess burst size (EBS, rfc4115) in bytes or packets
+ */
+struct dpni_rx_tc_policing_cfg {
+	uint32_t options;
+	enum dpni_policer_mode mode;
+	enum dpni_policer_unit units;
+	enum dpni_policer_color default_color;
+	uint32_t cir;
+	uint32_t cbs;
+	uint32_t eir;
+	uint32_t ebs;
+};
+
+int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io,
+			    uint32_t cmd_flags,
+			    uint16_t token,
+			    uint8_t tc_id,
+			    const struct dpni_rx_tc_policing_cfg *cfg);
+
+int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io,
+			    uint32_t cmd_flags,
+			    uint16_t token,
+			    uint8_t tc_id,
+			    struct dpni_rx_tc_policing_cfg *cfg);
+
 /**
  * enum dpni_congestion_unit - DPNI congestion units
  * @DPNI_CONGESTION_UNIT_BYTES: bytes units
@@ -914,6 +1058,70 @@ enum dpni_congestion_unit {
 	DPNI_CONGESTION_UNIT_FRAMES
 };
 
+/**
+ * enum dpni_early_drop_mode - DPNI early drop mode
+ * @DPNI_EARLY_DROP_MODE_NONE: early drop is disabled
+ * @DPNI_EARLY_DROP_MODE_TAIL: early drop in taildrop mode
+ * @DPNI_EARLY_DROP_MODE_WRED: early drop in WRED mode
+ */
+enum dpni_early_drop_mode {
+	DPNI_EARLY_DROP_MODE_NONE = 0,
+	DPNI_EARLY_DROP_MODE_TAIL,
+	DPNI_EARLY_DROP_MODE_WRED
+};
+
+/**
+ * struct dpni_wred_cfg - WRED configuration
+ * @max_threshold:	maximum threshold that packets may be discarded. Above
+ *	this threshold all packets are discarded; must be less than 2^39;
+ *	approximated to be expressed as (x+256)*2^(y-1) due to HW
+ *	implementation.
+ * @min_threshold:	minimum threshold that packets may be discarded at
+ * @drop_probability:	probability that a packet will be discarded (1-100,
+ *	associated with the max_threshold).
+ */
+struct dpni_wred_cfg {
+	uint64_t max_threshold;
+	uint64_t min_threshold;
+	uint8_t drop_probability;
+};
+
+/**
+ * struct dpni_early_drop_cfg - early-drop configuration
+ * @enable: drop enable
+ * @units: units type
+ * @green: WRED - 'green' configuration
+ * @yellow: WRED - 'yellow' configuration
+ * @red: WRED - 'red' configuration
+ */
+struct dpni_early_drop_cfg {
+	uint8_t enable;
+	enum dpni_congestion_unit units;
+	struct dpni_wred_cfg green;
+	struct dpni_wred_cfg yellow;
+	struct dpni_wred_cfg red;
+};
+
+void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg,
+			     uint8_t *early_drop_buf);
+
+void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg,
+			     const uint8_t *early_drop_buf);
+
+int dpni_set_early_drop(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			enum dpni_queue_type qtype,
+			uint8_t tc_id,
+			uint64_t early_drop_iova);
+
+int dpni_get_early_drop(struct fsl_mc_io *mc_io,
+			uint32_t cmd_flags,
+			uint16_t token,
+			enum dpni_queue_type qtype,
+			uint8_t tc_id,
+			uint64_t early_drop_iova);
+
 /**
  * enum dpni_dest - DPNI destination types
  * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and
diff --git a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
index 9e7376200d..c40090b8fe 100644
--- a/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
+++ b/drivers/net/dpaa2/mc/fsl_dpni_cmd.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
  *
  * Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
  *
  */
 #ifndef _FSL_DPNI_CMD_H
@@ -69,6 +69,8 @@
 #define DPNI_CMDID_SET_RX_TC_DIST	DPNI_CMD_V3(0x235)
 
+#define DPNI_CMDID_SET_RX_TC_POLICING	DPNI_CMD(0x23E)
+
 #define DPNI_CMDID_SET_QOS_TBL		DPNI_CMD_V2(0x240)
 #define DPNI_CMDID_ADD_QOS_ENT		DPNI_CMD_V2(0x241)
 #define DPNI_CMDID_REMOVE_QOS_ENT	DPNI_CMD(0x242)
@@ -77,6 +79,9 @@
 #define DPNI_CMDID_REMOVE_FS_ENT	DPNI_CMD(0x245)
 #define DPNI_CMDID_CLR_FS_ENT		DPNI_CMD(0x246)
 
+#define DPNI_CMDID_SET_TX_PRIORITIES	DPNI_CMD_V2(0x250)
+#define DPNI_CMDID_GET_RX_TC_POLICING	DPNI_CMD(0x251)
+
 #define DPNI_CMDID_GET_STATISTICS	DPNI_CMD_V3(0x25D)
 #define DPNI_CMDID_RESET_STATISTICS	DPNI_CMD(0x25E)
 #define DPNI_CMDID_GET_QUEUE		DPNI_CMD_V2(0x25F)
@@ -354,6 +359,19 @@ struct dpni_rsp_get_link_state {
 	uint64_t advertising;
 };
 
+#define DPNI_COUPLED_SHIFT	0
+#define DPNI_COUPLED_SIZE	1
+
+struct dpni_cmd_set_tx_shaping {
+	uint16_t tx_cr_max_burst_size;
+	uint16_t tx_er_max_burst_size;
+	uint32_t pad;
+	uint32_t tx_cr_rate_limit;
+	uint32_t tx_er_rate_limit;
+	/* from LSB: coupled:1 */
+	uint8_t coupled;
+};
+
 struct dpni_cmd_set_max_frame_length {
 	uint16_t max_frame_length;
 };
@@ -592,6 +610,45 @@ struct dpni_cmd_clear_fs_entries {
 	uint8_t tc_id;
 };
 
+#define DPNI_MODE_SHIFT		0
+#define DPNI_MODE_SIZE		4
+#define DPNI_COLOR_SHIFT	4
+#define DPNI_COLOR_SIZE		4
+#define DPNI_UNITS_SHIFT	0
+#define DPNI_UNITS_SIZE		4
+
+struct dpni_cmd_set_rx_tc_policing {
+	/* from LSB: mode:4 color:4 */
+	uint8_t mode_color;
+	/* from LSB: units: 4 */
+	uint8_t units;
+	uint8_t tc_id;
+	uint8_t pad;
+	uint32_t options;
+	uint32_t cir;
+	uint32_t cbs;
+	uint32_t eir;
+	uint32_t ebs;
+};
+
+struct dpni_cmd_get_rx_tc_policing {
+	uint16_t pad;
+	uint8_t tc_id;
+};
+
+struct dpni_rsp_get_rx_tc_policing {
+	/* from LSB: mode:4 color:4 */
+	uint8_t mode_color;
+	/* from LSB: units: 4 */
+	uint8_t units;
+	uint16_t pad;
+	uint32_t options;
+	uint32_t cir;
+	uint32_t cbs;
+	uint32_t eir;
+	uint32_t ebs;
+};
+
 #define DPNI_DROP_ENABLE_SHIFT	0
 #define DPNI_DROP_ENABLE_SIZE	1
 #define DPNI_DROP_UNITS_SHIFT	2
diff --git a/drivers/net/dpaa2/meson.build b/drivers/net/dpaa2/meson.build
index 844dd25159..f5f411d592 100644
--- a/drivers/net/dpaa2/meson.build
+++ b/drivers/net/dpaa2/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
+# Copyright 2018-2021 NXP
 
 if not is_linux
 	build = false
@@ -8,6 +8,7 @@ endif
 deps += ['mempool_dpaa2']
 sources = files('base/dpaa2_hw_dpni.c',
+		'dpaa2_tm.c',
 		'dpaa2_mux.c',
 		'dpaa2_ethdev.c',
 		'dpaa2_flow.c',

From patchwork Wed Jan 20 14:27:22 2021
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 367044
From: Hemant Agrawal
To: dev@dpdk.org, ferruh.yigit@intel.com
Date: Wed, 20 Jan 2021 19:57:22 +0530
Message-Id:
 <20210120142723.14090-8-hemant.agrawal@nxp.com>
In-Reply-To: <20210120142723.14090-1-hemant.agrawal@nxp.com>
References: <20210120142723.14090-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 7/7] net/dpaa2: add support to configure dpdmux max Rx frame len

This patch introduces a new PMD API, which helps applications configure
the maximum frame length for a given dpdmux interface.

Signed-off-by: Hemant Agrawal
---
 drivers/net/dpaa2/dpaa2_mux.c     | 28 +++++++++++++++++++++++++++-
 drivers/net/dpaa2/rte_pmd_dpaa2.h | 18 +++++++++++++++++-
 drivers/net/dpaa2/version.map     |  1 +
 3 files changed, 45 insertions(+), 2 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index f8366e839e..b397d333d6 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
  */
 
 #include
@@ -205,6 +205,32 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	return NULL;
 }
 
+int
+rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
+{
+	struct dpaa2_dpdmux_dev *dpdmux_dev;
+	int ret;
+
+	/* Find the DPDMUX from dpdmux_id in our list */
+	dpdmux_dev = get_dpdmux_from_id(dpdmux_id);
+	if (!dpdmux_dev) {
+		DPAA2_PMD_ERR("Invalid dpdmux_id: %d", dpdmux_id);
+		return -1;
+	}
+
+	ret = dpdmux_set_max_frame_length(&dpdmux_dev->dpdmux,
+			CMD_PRI_LOW, dpdmux_dev->token, max_rx_frame_len);
+	if (ret) {
+		DPAA2_PMD_ERR("DPDMUX: Unable to set MTU, check config: err %d",
+			      ret);
+		return ret;
+	}
+
+	DPAA2_PMD_INFO("dpdmux MTU set as %u",
+		       DPAA2_MAX_RX_PKT_LEN - RTE_ETHER_CRC_LEN);
+
+	return ret;
+}
+
 static int
 dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 			   struct vfio_device_info *obj_info __rte_unused,
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2.h b/drivers/net/dpaa2/rte_pmd_dpaa2.h
index ec8df75af9..7204a8f951 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2.h
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2021 NXP
  */
 
 #ifndef _RTE_PMD_DPAA2_H
@@ -40,6 +40,22 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			      struct rte_flow_item *pattern[],
 			      struct rte_flow_action *actions[]);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Configure the maximum Rx frame length of a demultiplex interface.
+ *
+ * @param dpdmux_id
+ *    ID of the DPDMUX MC object.
+ * @param max_rx_frame_len
+ *    Maximum receive frame length (will be checked to be the minimum of
+ *    all DPNIs).
+ */
+__rte_experimental
+int
+rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 72d1b2b1c8..3c087c2423 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -2,6 +2,7 @@ EXPERIMENTAL {
 	global:
 
 	rte_pmd_dpaa2_mux_flow_create;
+	rte_pmd_dpaa2_mux_rx_frame_len;
 	rte_pmd_dpaa2_set_custom_hash;
 };