From patchwork Fri Oct 23 04:43:39 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 318907
From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, konstantin.ananyev@intel.com
Cc: olivier.matz@6wind.com, david.marchand@redhat.com, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, nd@arm.com, stable@dpdk.org
Date: Thu, 22 Oct 2020 23:43:39 -0500
Message-Id: <20201023044343.13462-2-honnappa.nagarahalli@arm.com>
In-Reply-To: <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
References: <20200224203931.21256-1-honnappa.nagarahalli@arm.com> <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 1/5] test/ring: fix the memory dump size
Pass the correct number of bytes to dump the memory.

Fixes: bf28df24e915 ("test/ring: add contention stress test")
Cc: konstantin.ananyev@intel.com
Cc: stable@dpdk.org

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Dharmik Thakkar
Acked-by: Konstantin Ananyev
---
 app/test/test_ring_stress_impl.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app/test/test_ring_stress_impl.h b/app/test/test_ring_stress_impl.h
index 3b9a480eb..f9ca63b90 100644
--- a/app/test/test_ring_stress_impl.h
+++ b/app/test/test_ring_stress_impl.h
@@ -159,7 +159,7 @@ check_updt_elem(struct ring_elem *elm[], uint32_t num,
 			"offending object: %p\n",
 			__func__, rte_lcore_id(), num, i, elm[i]);
 		rte_memdump(stdout, "expected", check, sizeof(*check));
-		rte_memdump(stdout, "result", elm[i], sizeof(elm[i]));
+		rte_memdump(stdout, "result", elm[i], sizeof(*elm[i]));
 		rte_spinlock_unlock(&dump_lock);
 		return -EINVAL;
 	}

From patchwork Fri Oct 23 04:43:40 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 318908
From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, konstantin.ananyev@intel.com
Cc: olivier.matz@6wind.com, david.marchand@redhat.com, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, nd@arm.com
Date: Thu, 22 Oct 2020 23:43:40 -0500
Message-Id: <20201023044343.13462-3-honnappa.nagarahalli@arm.com>
In-Reply-To: <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
References: <20200224203931.21256-1-honnappa.nagarahalli@arm.com> <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 2/5] lib/ring: add zero copy APIs

Add zero-copy APIs. These APIs provide the capability to copy the data to/from the ring memory directly, without a temporary copy (for example, an array of mbufs on the stack). Use cases that involve copying large amounts of data to/from the ring can benefit from these APIs.
Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Dharmik Thakkar
---
 lib/librte_ring/meson.build        |   1 +
 lib/librte_ring/rte_ring_elem.h    |   1 +
 lib/librte_ring/rte_ring_peek_zc.h | 542 +++++++++++++++++++++++++++++
 3 files changed, 544 insertions(+)
 create mode 100644 lib/librte_ring/rte_ring_peek_zc.h

diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build
index 31c0b4649..36fdcb6a5 100644
--- a/lib/librte_ring/meson.build
+++ b/lib/librte_ring/meson.build
@@ -11,5 +11,6 @@ headers = files('rte_ring.h',
 		'rte_ring_hts_c11_mem.h',
 		'rte_ring_peek.h',
 		'rte_ring_peek_c11_mem.h',
+		'rte_ring_peek_zc.h',
 		'rte_ring_rts.h',
 		'rte_ring_rts_c11_mem.h')
diff --git a/lib/librte_ring/rte_ring_elem.h b/lib/librte_ring/rte_ring_elem.h
index 938b398fc..7034d29c0 100644
--- a/lib/librte_ring/rte_ring_elem.h
+++ b/lib/librte_ring/rte_ring_elem.h
@@ -1079,6 +1079,7 @@ rte_ring_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
 #ifdef ALLOW_EXPERIMENTAL_API
 #include <rte_ring_peek.h>
+#include <rte_ring_peek_zc.h>
 #endif

 #include <rte_ring.h>
diff --git a/lib/librte_ring/rte_ring_peek_zc.h b/lib/librte_ring/rte_ring_peek_zc.h
new file mode 100644
index 000000000..9db2d343f
--- /dev/null
+++ b/lib/librte_ring/rte_ring_peek_zc.h
@@ -0,0 +1,542 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2020 Arm Limited
+ * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org
+ * All rights reserved.
+ * Derived from FreeBSD's bufring.h
+ * Used as BSD-3 Licensed with permission from Kip Macy.
+ */
+
+#ifndef _RTE_RING_PEEK_ZC_H_
+#define _RTE_RING_PEEK_ZC_H_
+
+/**
+ * @file
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ * It is not recommended to include this file directly.
+ * Please include <rte_ring_elem.h> instead.
+ *
+ * Ring Peek Zero Copy APIs
+ * These APIs make it possible to split the public enqueue/dequeue API
+ * into 3 parts:
+ * - enqueue/dequeue start
+ * - copy data to/from the ring
+ * - enqueue/dequeue finish
+ * Along with the advantages of the peek APIs, these APIs provide the
+ * ability to avoid copying the data to a temporary area (for example,
+ * an array of mbufs on the stack).
+ *
+ * Note that currently these APIs are available only for two sync modes:
+ * 1) Single Producer/Single Consumer (RTE_RING_SYNC_ST)
+ * 2) Serialized Producer/Serialized Consumer (RTE_RING_SYNC_MT_HTS).
+ * It is the user's responsibility to create/init the ring with the
+ * appropriate sync modes selected.
+ *
+ * Following are some examples showing the API usage.
+ * 1)
+ * struct elem_obj {uint64_t a; uint32_t b, c;};
+ * struct elem_obj *obj;
+ *
+ * // Create ring with sync type RTE_RING_SYNC_ST or RTE_RING_SYNC_MT_HTS
+ * // Reserve space on the ring
+ * n = rte_ring_enqueue_zc_bulk_elem_start(r, sizeof(struct elem_obj),
+ *					1, &zcd, NULL);
+ *
+ * // Produce the data directly on the ring memory
+ * obj = (struct elem_obj *)zcd->ptr1;
+ * obj->a = rte_get_a();
+ * obj->b = rte_get_b();
+ * obj->c = rte_get_c();
+ * rte_ring_enqueue_zc_elem_finish(r, n);
+ *
+ * 2)
+ * // Create ring with sync type RTE_RING_SYNC_ST or RTE_RING_SYNC_MT_HTS
+ * // Reserve space on the ring
+ * n = rte_ring_enqueue_zc_burst_start(r, 32, &zcd, NULL);
+ *
+ * // Pkt I/O core polls packets from the NIC
+ * if (n == 32)
+ *	nb_rx = rte_eth_rx_burst(portid, queueid, zcd->ptr1, 32);
+ * else
+ *	nb_rx = rte_eth_rx_burst(portid, queueid, zcd->ptr1, zcd->n1);
+ *
+ * // Provide packets to the packet processing cores
+ * rte_ring_enqueue_zc_finish(r, nb_rx);
+ *
+ * Note that between _start_ and _finish_ no other thread can proceed
+ * with an enqueue/dequeue operation until _finish_ completes.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_ring_elem.h>
+
+/**
+ * Ring zero-copy information structure.
+ *
+ * This structure contains the pointers and length of the space
+ * reserved on the ring storage.
+ */
+struct rte_ring_zc_data {
+	/* Pointer to the first space in the ring */
+	void **ptr1;
+	/* Pointer to the second space in the ring if there is wrap-around */
+	void **ptr2;
+	/* Number of elements in the first pointer. If this is equal to
+	 * the number of elements requested, there is no wrap-around and
+	 * ptr2 is not set. Otherwise, subtracting n1 from the number of
+	 * elements requested gives the number of elements available at
+	 * ptr2.
+	 */
+	unsigned int n1;
+} __rte_cache_aligned;
+
+static __rte_always_inline void
+__rte_ring_get_elem_addr(struct rte_ring *r, uint32_t head,
+	uint32_t esize, uint32_t num, void **dst1, uint32_t *n1, void **dst2)
+{
+	uint32_t idx, scale, nr_idx;
+	uint32_t *ring = (uint32_t *)&r[1];
+
+	/* Normalize to uint32_t */
+	scale = esize / sizeof(uint32_t);
+	idx = head & r->mask;
+	nr_idx = idx * scale;
+
+	*dst1 = ring + nr_idx;
+	*n1 = num;
+
+	if (idx + num > r->size) {
+		*n1 = r->size - idx;
+		*dst2 = ring;
+	}
+}
+
+/**
+ * @internal This function moves prod head value.
+ */
+static __rte_always_inline unsigned int
+__rte_ring_do_enqueue_zc_elem_start(struct rte_ring *r, unsigned int esize,
+	uint32_t n, enum rte_ring_queue_behavior behavior,
+	struct rte_ring_zc_data *zcd, unsigned int *free_space)
+{
+	uint32_t free, head, next;
+
+	switch (r->prod.sync_type) {
+	case RTE_RING_SYNC_ST:
+		n = __rte_ring_move_prod_head(r, RTE_RING_SYNC_ST, n,
+			behavior, &head, &next, &free);
+		__rte_ring_get_elem_addr(r, head, esize, n, (void **)&zcd->ptr1,
+			&zcd->n1, (void **)&zcd->ptr2);
+		break;
+	case RTE_RING_SYNC_MT_HTS:
+		n = __rte_ring_hts_move_prod_head(r, n, behavior, &head, &free);
+		__rte_ring_get_elem_addr(r, head, esize, n, (void **)&zcd->ptr1,
+			&zcd->n1, (void **)&zcd->ptr2);
+		break;
+	case RTE_RING_SYNC_MT:
+	case RTE_RING_SYNC_MT_RTS:
+	default:
+		/* unsupported mode, shouldn't be here */
+		RTE_ASSERT(0);
+		n = 0;
+		free = 0;
+	}
+
+	if (free_space != NULL)
+		*free_space = free - n;
+	return n;
+}
+
+/**
+ * Start to enqueue several objects on the ring.
+ * Note that no actual objects are put in the queue by this function,
+ * it just reserves space for the user on the ring.
+ * User has to copy objects into the queue using the returned pointers.
+ * User should call rte_ring_enqueue_zc_elem_finish to complete the
+ * enqueue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ * @param n
+ *   The number of objects to add in the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param free_space
+ *   If non-NULL, returns the amount of space in the ring after the
+ *   reservation operation has finished.
+ * @return
+ *   The number of objects that can be enqueued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_enqueue_zc_bulk_elem_start(struct rte_ring *r, unsigned int esize,
+	unsigned int n, struct rte_ring_zc_data *zcd, unsigned int *free_space)
+{
+	return __rte_ring_do_enqueue_zc_elem_start(r, esize, n,
+		RTE_RING_QUEUE_FIXED, zcd, free_space);
+}
+
+/**
+ * Start to enqueue several pointers to objects on the ring.
+ * Note that no actual pointers are put in the queue by this function,
+ * it just reserves space for the user on the ring.
+ * User has to copy pointers to objects into the queue using the
+ * returned pointers.
+ * User should call rte_ring_enqueue_zc_finish to complete the
+ * enqueue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of objects to add in the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param free_space
+ *   If non-NULL, returns the amount of space in the ring after the
+ *   reservation operation has finished.
+ * @return
+ *   The number of objects that can be enqueued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_enqueue_zc_bulk_start(struct rte_ring *r, unsigned int n,
+	struct rte_ring_zc_data *zcd, unsigned int *free_space)
+{
+	return rte_ring_enqueue_zc_bulk_elem_start(r, sizeof(uintptr_t), n,
+		zcd, free_space);
+}
+
+/**
+ * Start to enqueue several objects on the ring.
+ * Note that no actual objects are put in the queue by this function,
+ * it just reserves space for the user on the ring.
+ * User has to copy objects into the queue using the returned pointers.
+ * User should call rte_ring_enqueue_zc_elem_finish to complete the
+ * enqueue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ * @param n
+ *   The number of objects to add in the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param free_space
+ *   If non-NULL, returns the amount of space in the ring after the
+ *   reservation operation has finished.
+ * @return
+ *   The number of objects that can be enqueued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_enqueue_zc_burst_elem_start(struct rte_ring *r, unsigned int esize,
+	unsigned int n, struct rte_ring_zc_data *zcd, unsigned int *free_space)
+{
+	return __rte_ring_do_enqueue_zc_elem_start(r, esize, n,
+		RTE_RING_QUEUE_VARIABLE, zcd, free_space);
+}
+
+/**
+ * Start to enqueue several pointers to objects on the ring.
+ * Note that no actual pointers are put in the queue by this function,
+ * it just reserves space for the user on the ring.
+ * User has to copy pointers to objects into the queue using the
+ * returned pointers.
+ * User should call rte_ring_enqueue_zc_finish to complete the
+ * enqueue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of objects to add in the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param free_space
+ *   If non-NULL, returns the amount of space in the ring after the
+ *   reservation operation has finished.
+ * @return
+ *   The number of objects that can be enqueued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_enqueue_zc_burst_start(struct rte_ring *r, unsigned int n,
+	struct rte_ring_zc_data *zcd, unsigned int *free_space)
+{
+	return rte_ring_enqueue_zc_burst_elem_start(r, sizeof(uintptr_t), n,
+		zcd, free_space);
+}
+
+/**
+ * Complete enqueuing several objects on the ring.
+ * Note that the number of objects to enqueue should not exceed the
+ * previous enqueue_start return value.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of objects to add to the ring.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_ring_enqueue_zc_elem_finish(struct rte_ring *r, unsigned int n)
+{
+	uint32_t tail;
+
+	switch (r->prod.sync_type) {
+	case RTE_RING_SYNC_ST:
+		n = __rte_ring_st_get_tail(&r->prod, &tail, n);
+		__rte_ring_st_set_head_tail(&r->prod, tail, n, 1);
+		break;
+	case RTE_RING_SYNC_MT_HTS:
+		n = __rte_ring_hts_get_tail(&r->hts_prod, &tail, n);
+		__rte_ring_hts_set_head_tail(&r->hts_prod, tail, n, 1);
+		break;
+	case RTE_RING_SYNC_MT:
+	case RTE_RING_SYNC_MT_RTS:
+	default:
+		/* unsupported mode, shouldn't be here */
+		RTE_ASSERT(0);
+	}
+}
+
+/**
+ * Complete enqueuing several pointers to objects on the ring.
+ * Note that the number of objects to enqueue should not exceed the
+ * previous enqueue_start return value.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of pointers to objects to add to the ring.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_ring_enqueue_zc_finish(struct rte_ring *r, unsigned int n)
+{
+	rte_ring_enqueue_zc_elem_finish(r, n);
+}
+
+/**
+ * @internal This function moves cons head value.
+ */
+static __rte_always_inline unsigned int
+__rte_ring_do_dequeue_zc_elem_start(struct rte_ring *r,
+	uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior,
+	struct rte_ring_zc_data *zcd, unsigned int *available)
+{
+	uint32_t avail, head, next;
+
+	switch (r->cons.sync_type) {
+	case RTE_RING_SYNC_ST:
+		n = __rte_ring_move_cons_head(r, RTE_RING_SYNC_ST, n,
+			behavior, &head, &next, &avail);
+		__rte_ring_get_elem_addr(r, head, esize, n, (void **)&zcd->ptr1,
+			&zcd->n1, (void **)&zcd->ptr2);
+		break;
+	case RTE_RING_SYNC_MT_HTS:
+		n = __rte_ring_hts_move_cons_head(r, n, behavior,
+			&head, &avail);
+		__rte_ring_get_elem_addr(r, head, esize, n, (void **)&zcd->ptr1,
+			&zcd->n1, (void **)&zcd->ptr2);
+		break;
+	case RTE_RING_SYNC_MT:
+	case RTE_RING_SYNC_MT_RTS:
+	default:
+		/* unsupported mode, shouldn't be here */
+		RTE_ASSERT(0);
+		n = 0;
+		avail = 0;
+	}
+
+	if (available != NULL)
+		*available = avail - n;
+	return n;
+}
+
+/**
+ * Start to dequeue several objects from the ring.
+ * Note that no actual objects are copied from the queue by this function.
+ * User has to copy objects from the queue using the returned pointers.
+ * User should call rte_ring_dequeue_zc_elem_finish to complete the
+ * dequeue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ * @param n
+ *   The number of objects to remove from the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param available
+ *   If non-NULL, returns the number of remaining ring entries after the
+ *   dequeue has finished.
+ * @return
+ *   The number of objects that can be dequeued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_dequeue_zc_bulk_elem_start(struct rte_ring *r, unsigned int esize,
+	unsigned int n, struct rte_ring_zc_data *zcd, unsigned int *available)
+{
+	return __rte_ring_do_dequeue_zc_elem_start(r, esize, n,
+		RTE_RING_QUEUE_FIXED, zcd, available);
+}
+
+/**
+ * Start to dequeue several pointers to objects from the ring.
+ * Note that no actual pointers are removed from the queue by this function.
+ * User has to copy pointers to objects from the queue using the
+ * returned pointers.
+ * User should call rte_ring_dequeue_zc_finish to complete the
+ * dequeue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of objects to remove from the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param available
+ *   If non-NULL, returns the number of remaining ring entries after the
+ *   dequeue has finished.
+ * @return
+ *   The number of objects that can be dequeued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_dequeue_zc_bulk_start(struct rte_ring *r, unsigned int n,
+	struct rte_ring_zc_data *zcd, unsigned int *available)
+{
+	return rte_ring_dequeue_zc_bulk_elem_start(r, sizeof(uintptr_t),
+		n, zcd, available);
+}
+
+/**
+ * Start to dequeue several objects from the ring.
+ * Note that no actual objects are copied from the queue by this function.
+ * User has to copy objects from the queue using the returned pointers.
+ * User should call rte_ring_dequeue_zc_elem_finish to complete the
+ * dequeue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ *   This must be the same value used while creating the ring. Otherwise
+ *   the results are undefined.
+ * @param n
+ *   The number of objects to dequeue from the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param available
+ *   If non-NULL, returns the number of remaining ring entries after the
+ *   dequeue has finished.
+ * @return
+ *   The number of objects that can be dequeued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_dequeue_zc_burst_elem_start(struct rte_ring *r, unsigned int esize,
+	unsigned int n, struct rte_ring_zc_data *zcd, unsigned int *available)
+{
+	return __rte_ring_do_dequeue_zc_elem_start(r, esize, n,
+		RTE_RING_QUEUE_VARIABLE, zcd, available);
+}
+
+/**
+ * Start to dequeue several pointers to objects from the ring.
+ * Note that no actual pointers are removed from the queue by this function.
+ * User has to copy pointers to objects from the queue using the
+ * returned pointers.
+ * User should call rte_ring_dequeue_zc_finish to complete the
+ * dequeue operation.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of objects to remove from the ring.
+ * @param zcd
+ *   Structure containing the pointers and length of the space
+ *   reserved on the ring storage.
+ * @param available
+ *   If non-NULL, returns the number of remaining ring entries after the
+ *   dequeue has finished.
+ * @return
+ *   The number of objects that can be dequeued, either 0 or n.
+ */
+__rte_experimental
+static __rte_always_inline unsigned int
+rte_ring_dequeue_zc_burst_start(struct rte_ring *r, unsigned int n,
+	struct rte_ring_zc_data *zcd, unsigned int *available)
+{
+	return rte_ring_dequeue_zc_burst_elem_start(r, sizeof(uintptr_t), n,
+		zcd, available);
+}
+
+/**
+ * Complete dequeuing several objects from the ring.
+ * Note that the number of objects to dequeue should not exceed the
+ * previous dequeue_start return value.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of objects to remove from the ring.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_ring_dequeue_zc_elem_finish(struct rte_ring *r, unsigned int n)
+{
+	uint32_t tail;
+
+	switch (r->cons.sync_type) {
+	case RTE_RING_SYNC_ST:
+		n = __rte_ring_st_get_tail(&r->cons, &tail, n);
+		__rte_ring_st_set_head_tail(&r->cons, tail, n, 0);
+		break;
+	case RTE_RING_SYNC_MT_HTS:
+		n = __rte_ring_hts_get_tail(&r->hts_cons, &tail, n);
+		__rte_ring_hts_set_head_tail(&r->hts_cons, tail, n, 0);
+		break;
+	case RTE_RING_SYNC_MT:
+	case RTE_RING_SYNC_MT_RTS:
+	default:
+		/* unsupported mode, shouldn't be here */
+		RTE_ASSERT(0);
+	}
+}
+
+/**
+ * Complete dequeuing several pointers to objects from the ring.
+ * Note that the number of objects to dequeue should not exceed the
+ * previous dequeue_start return value.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param n
+ *   The number of pointers to objects to remove from the ring.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_ring_dequeue_zc_finish(struct rte_ring *r, unsigned int n)
+{
+	rte_ring_dequeue_zc_elem_finish(r, n);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RING_PEEK_ZC_H_ */

From patchwork Fri Oct 23 04:43:41 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 318909
From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, konstantin.ananyev@intel.com
Cc: olivier.matz@6wind.com, david.marchand@redhat.com, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, nd@arm.com
Date: Thu, 22 Oct 2020 23:43:41 -0500
Message-Id: <20201023044343.13462-4-honnappa.nagarahalli@arm.com>
In-Reply-To: <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
References: <20200224203931.21256-1-honnappa.nagarahalli@arm.com> <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 3/5] test/ring: move common function to header file
"dev" Move test_ring_inc_ptr to header file so that it can be used by functions in other files. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Dharmik Thakkar --- app/test/test_ring.c | 11 ----------- app/test/test_ring.h | 11 +++++++++++ 2 files changed, 11 insertions(+), 11 deletions(-) -- 2.17.1 diff --git a/app/test/test_ring.c b/app/test/test_ring.c index a62cb263b..329d538a9 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -243,17 +243,6 @@ test_ring_deq_impl(struct rte_ring *r, void **obj, int esize, unsigned int n, NULL); } -static void** -test_ring_inc_ptr(void **obj, int esize, unsigned int n) -{ - /* Legacy queue APIs? */ - if ((esize) == -1) - return ((void **)obj) + n; - else - return (void **)(((uint32_t *)obj) + - (n * esize / sizeof(uint32_t))); -} - static void test_ring_mem_init(void *obj, unsigned int count, int esize) { diff --git a/app/test/test_ring.h b/app/test/test_ring.h index d4b15af7c..16697ee02 100644 --- a/app/test/test_ring.h +++ b/app/test/test_ring.h @@ -42,6 +42,17 @@ test_ring_create(const char *name, int esize, unsigned int count, (socket_id), (flags)); } +static inline void** +test_ring_inc_ptr(void **obj, int esize, unsigned int n) +{ + /* Legacy queue APIs? 
+	 */
+	if ((esize) == -1)
+		return ((void **)obj) + n;
+	else
+		return (void **)(((uint32_t *)obj) +
+			(n * esize / sizeof(uint32_t)));
+}
+
 static __rte_always_inline unsigned int
 test_ring_enqueue(struct rte_ring *r, void **obj, int esize, unsigned int n,
 		unsigned int api_type)

From patchwork Fri Oct 23 04:43:42 2020
From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, konstantin.ananyev@intel.com
Cc: olivier.matz@6wind.com, david.marchand@redhat.com, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, nd@arm.com
Date: Thu, 22 Oct 2020 23:43:42 -0500
Message-Id: <20201023044343.13462-5-honnappa.nagarahalli@arm.com>
In-Reply-To: <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
References: <20200224203931.21256-1-honnappa.nagarahalli@arm.com> <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 4/5] test/ring: add functional tests for zero copy APIs
Sender: "dev"

Add functional tests for zero copy APIs. Test enqueue/dequeue functions
are created using the zero copy APIs to fit into the existing testing
method.

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Dharmik Thakkar
---
 app/test/test_ring.c | 196 +++++++++++++++++++++++++++++++++++++++++++
 app/test/test_ring.h |  42 ++++++++++
 2 files changed, 238 insertions(+)

--
2.17.1

Acked-by: Konstantin Ananyev

diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 329d538a9..99fe4b46f 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2014 Intel Corporation
+ * Copyright(c) 2020 Arm Limited
  */
 
 #include
 
@@ -68,6 +69,149 @@
 static const int esize[] = {-1, 4, 8, 16, 20};
 
+/* Wrappers around the zero-copy APIs. The wrappers match
+ * the normal enqueue/dequeue API declarations.
+ */
+static unsigned int
+test_ring_enqueue_zc_bulk(struct rte_ring *r, void * const *obj_table,
+	unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_bulk_start(r, n, &zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_bulk_elem(struct rte_ring *r, const void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_bulk_elem_start(r, esize, n,
+		&zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, esize, ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_burst(struct rte_ring *r, void * const *obj_table,
+	unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_burst_start(r, n, &zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_burst_elem(struct rte_ring *r, const void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_burst_elem_start(r, esize, n,
+		&zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, esize, ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_bulk(struct rte_ring *r, void **obj_table,
+	unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret =
rte_ring_dequeue_zc_bulk_start(r, n, &zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_bulk_elem(struct rte_ring *r, void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_bulk_elem_start(r, esize, n,
+		&zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, esize, ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_burst(struct rte_ring *r, void **obj_table,
+	unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_burst_start(r, n, &zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_burst_elem(struct rte_ring *r, void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_burst_elem_start(r, esize, n,
+		&zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, esize, ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
 static const struct {
 	const char *desc;
 	uint32_t api_type;
@@ -219,6 +363,58 @@ static const struct {
 			.felem = rte_ring_dequeue_burst_elem,
 		},
 	},
+	{
+		.desc = "SP/SC sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BULK | TEST_RING_THREAD_SPSC,
+		.create_flags = RING_F_SP_ENQ | RING_F_SC_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_bulk,
+			.felem = test_ring_enqueue_zc_bulk_elem,
+		},
+		.deq = {
+			.flegacy =
test_ring_dequeue_zc_bulk,
+			.felem = test_ring_dequeue_zc_bulk_elem,
+		},
+	},
+	{
+		.desc = "MP_HTS/MC_HTS sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BULK | TEST_RING_THREAD_DEF,
+		.create_flags = RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_bulk,
+			.felem = test_ring_enqueue_zc_bulk_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_bulk,
+			.felem = test_ring_dequeue_zc_bulk_elem,
+		},
+	},
+	{
+		.desc = "SP/SC sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BURST | TEST_RING_THREAD_SPSC,
+		.create_flags = RING_F_SP_ENQ | RING_F_SC_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_burst,
+			.felem = test_ring_enqueue_zc_burst_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_burst,
+			.felem = test_ring_dequeue_zc_burst_elem,
+		},
+	},
+	{
+		.desc = "MP_HTS/MC_HTS sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BURST | TEST_RING_THREAD_DEF,
+		.create_flags = RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_burst,
+			.felem = test_ring_enqueue_zc_burst_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_burst,
+			.felem = test_ring_dequeue_zc_burst_elem,
+		},
+	}
 };
 
 static unsigned int
diff --git a/app/test/test_ring.h b/app/test/test_ring.h
index 16697ee02..33c8a31fe 100644
--- a/app/test/test_ring.h
+++ b/app/test/test_ring.h
@@ -53,6 +53,48 @@ test_ring_inc_ptr(void **obj, int esize, unsigned int n)
 			(n * esize / sizeof(uint32_t)));
 }
 
+static inline void
+test_ring_mem_copy(void *dst, void * const *src, int esize, unsigned int num)
+{
+	size_t temp_sz;
+
+	temp_sz = num * sizeof(void *);
+	if (esize != -1)
+		temp_sz = esize * num;
+
+	memcpy(dst, src, temp_sz);
+}
+
+/* Copy to the ring memory */
+static inline void
+test_ring_copy_to(struct rte_ring_zc_data *zcd, void * const *src, int esize,
+	unsigned int num)
+{
+	test_ring_mem_copy(zcd->ptr1, src, esize, zcd->n1);
+	if (zcd->n1 != num) {
+		if (esize == -1)
+			src = src + zcd->n1;
+		else
+			src = (void * const *)(((const
uint32_t *)src) +
+				(zcd->n1 * esize / sizeof(uint32_t)));
+		test_ring_mem_copy(zcd->ptr2, src,
+			esize, num - zcd->n1);
+	}
+}
+
+/* Copy from the ring memory */
+static inline void
+test_ring_copy_from(struct rte_ring_zc_data *zcd, void *dst, int esize,
+	unsigned int num)
+{
+	test_ring_mem_copy(dst, zcd->ptr1, esize, zcd->n1);
+
+	if (zcd->n1 != num) {
+		dst = test_ring_inc_ptr(dst, esize, zcd->n1);
+		test_ring_mem_copy(dst, zcd->ptr2, esize, num - zcd->n1);
+	}
+}
+
 static __rte_always_inline unsigned int
 test_ring_enqueue(struct rte_ring *r, void **obj, int esize, unsigned int n,
 		unsigned int api_type)

From patchwork Fri Oct 23 04:43:43 2020
From: Honnappa Nagarahalli
To: dev@dpdk.org, honnappa.nagarahalli@arm.com, konstantin.ananyev@intel.com
Cc: olivier.matz@6wind.com, david.marchand@redhat.com, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, nd@arm.com
Date: Thu, 22 Oct 2020 23:43:43 -0500
Message-Id:
<20201023044343.13462-6-honnappa.nagarahalli@arm.com>
In-Reply-To: <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
References: <20200224203931.21256-1-honnappa.nagarahalli@arm.com> <20201023044343.13462-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v3 5/5] test/ring: add stress tests for zero copy APIs
Sender: "dev"

Add stress tests for zero copy API.

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Dharmik Thakkar
---
 app/test/meson.build                   |  2 +
 app/test/test_ring_mt_peek_stress_zc.c | 56 ++++++++++++++++++++++
 app/test/test_ring_st_peek_stress_zc.c | 65 ++++++++++++++++++++++++++
 app/test/test_ring_stress.c            |  6 +++
 app/test/test_ring_stress.h            |  2 +
 5 files changed, 131 insertions(+)
 create mode 100644 app/test/test_ring_mt_peek_stress_zc.c
 create mode 100644 app/test/test_ring_st_peek_stress_zc.c

--
2.17.1

Acked-by: Konstantin Ananyev

diff --git a/app/test/meson.build b/app/test/meson.build
index 8bfb02890..88c831a92 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -108,9 +108,11 @@ test_sources = files('commands.c',
 	'test_ring_mpmc_stress.c',
 	'test_ring_hts_stress.c',
 	'test_ring_mt_peek_stress.c',
+	'test_ring_mt_peek_stress_zc.c',
 	'test_ring_perf.c',
 	'test_ring_rts_stress.c',
 	'test_ring_st_peek_stress.c',
+	'test_ring_st_peek_stress_zc.c',
 	'test_ring_stress.c',
 	'test_rwlock.c',
 	'test_sched.c',
diff --git a/app/test/test_ring_mt_peek_stress_zc.c b/app/test/test_ring_mt_peek_stress_zc.c
new file mode 100644
index 000000000..7e0bd511a
--- /dev/null
+++ b/app/test/test_ring_mt_peek_stress_zc.c
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Arm Limited
+ */
+
+#include "test_ring.h"
+#include "test_ring_stress_impl.h"
+#include
+
+static inline uint32_t
+_st_ring_dequeue_bulk(struct rte_ring *r, void **obj, uint32_t n,
+	uint32_t *avail)
+{
+	uint32_t m;
+	struct rte_ring_zc_data zcd;
+
+	m = rte_ring_dequeue_zc_bulk_start(r, n, &zcd, avail);
+	n = (m == n) ? n : 0;
+	if (n != 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj, -1, n);
+		rte_ring_dequeue_zc_finish(r, n);
+	}
+
+	return n;
+}
+
+static inline uint32_t
+_st_ring_enqueue_bulk(struct rte_ring *r, void * const *obj, uint32_t n,
+	uint32_t *free)
+{
+	uint32_t m;
+	struct rte_ring_zc_data zcd;
+
+	m = rte_ring_enqueue_zc_bulk_start(r, n, &zcd, free);
+	n = (m == n) ? n : 0;
+	if (n != 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj, -1, n);
+		rte_ring_enqueue_zc_finish(r, n);
+	}
+
+	return n;
+}
+
+static int
+_st_ring_init(struct rte_ring *r, const char *name, uint32_t num)
+{
+	return rte_ring_init(r, name, num,
+		RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ);
+}
+
+const struct test test_ring_mt_peek_stress_zc = {
+	.name = "MT_PEEK_ZC",
+	.nb_case = RTE_DIM(tests),
+	.cases = tests,
+};
diff --git a/app/test/test_ring_st_peek_stress_zc.c b/app/test/test_ring_st_peek_stress_zc.c
new file mode 100644
index 000000000..2933e30bf
--- /dev/null
+++ b/app/test/test_ring_st_peek_stress_zc.c
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Arm Limited
+ */
+
+#include "test_ring.h"
+#include "test_ring_stress_impl.h"
+#include
+
+static inline uint32_t
+_st_ring_dequeue_bulk(struct rte_ring *r, void **obj, uint32_t n,
+	uint32_t *avail)
+{
+	uint32_t m;
+	struct rte_ring_zc_data zcd;
+
+	static rte_spinlock_t lck = RTE_SPINLOCK_INITIALIZER;
+
+	rte_spinlock_lock(&lck);
+
+	m = rte_ring_dequeue_zc_bulk_start(r, n, &zcd, avail);
+	n = (m == n) ?
n : 0;
+	if (n != 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj, -1, n);
+		rte_ring_dequeue_zc_finish(r, n);
+	}
+
+	rte_spinlock_unlock(&lck);
+	return n;
+}
+
+static inline uint32_t
+_st_ring_enqueue_bulk(struct rte_ring *r, void * const *obj, uint32_t n,
+	uint32_t *free)
+{
+	uint32_t m;
+	struct rte_ring_zc_data zcd;
+
+	static rte_spinlock_t lck = RTE_SPINLOCK_INITIALIZER;
+
+	rte_spinlock_lock(&lck);
+
+	m = rte_ring_enqueue_zc_bulk_start(r, n, &zcd, free);
+	n = (m == n) ? n : 0;
+	if (n != 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj, -1, n);
+		rte_ring_enqueue_zc_finish(r, n);
+	}
+
+	rte_spinlock_unlock(&lck);
+	return n;
+}
+
+static int
+_st_ring_init(struct rte_ring *r, const char *name, uint32_t num)
+{
+	return rte_ring_init(r, name, num, RING_F_SP_ENQ | RING_F_SC_DEQ);
+}
+
+const struct test test_ring_st_peek_stress_zc = {
+	.name = "ST_PEEK_ZC",
+	.nb_case = RTE_DIM(tests),
+	.cases = tests,
+};
diff --git a/app/test/test_ring_stress.c b/app/test/test_ring_stress.c
index c4f82ea56..1af45e0fc 100644
--- a/app/test/test_ring_stress.c
+++ b/app/test/test_ring_stress.c
@@ -49,9 +49,15 @@ test_ring_stress(void)
 	n += test_ring_mt_peek_stress.nb_case;
 	k += run_test(&test_ring_mt_peek_stress);
 
+	n += test_ring_mt_peek_stress_zc.nb_case;
+	k += run_test(&test_ring_mt_peek_stress_zc);
+
 	n += test_ring_st_peek_stress.nb_case;
 	k += run_test(&test_ring_st_peek_stress);
 
+	n += test_ring_st_peek_stress_zc.nb_case;
+	k += run_test(&test_ring_st_peek_stress_zc);
+
 	printf("Number of tests:\t%u\nSuccess:\t%u\nFailed:\t%u\n",
 		n, k, n - k);
 	return (k != n);
diff --git a/app/test/test_ring_stress.h b/app/test/test_ring_stress.h
index c85d6fa92..416d68c9a 100644
--- a/app/test/test_ring_stress.h
+++ b/app/test/test_ring_stress.h
@@ -36,4 +36,6 @@ extern const struct test test_ring_mpmc_stress;
 extern const struct test test_ring_rts_stress;
 extern const struct test test_ring_hts_stress;
 extern const struct test
test_ring_mt_peek_stress;
+extern const struct test test_ring_mt_peek_stress_zc;
 extern const struct test test_ring_st_peek_stress;
+extern const struct test test_ring_st_peek_stress_zc;