From patchwork Mon Jan 13 17:25:16 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182812
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
    bruce.richardson@intel.com, david.marchand@redhat.com,
    pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
    honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
    gavin.hu@arm.com, nd@arm.com
Date: Mon, 13 Jan 2020 11:25:16 -0600
Message-Id: <20200113172518.37815-5-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
    <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v8 4/6] test/ring: modify perf test cases to use
    rte_ring_xxx_elem APIs

Adjust the performance test cases to test rte_ring_xxx_elem APIs.
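
The reworked tests route every enqueue/dequeue through the
test_ring_enqueue()/test_ring_dequeue() wrappers from test_ring.h,
using esize == -1 to select the legacy pointer-ring APIs and esize == 16
to exercise 16B elements. As a rough illustration only (not part of this
patch), the two code paths being timed boil down to calls like the sketch
below; the ring names, the elem16 struct and the sizes are made up for the
example, and the element-API names (rte_ring_create_elem(),
rte_ring_enqueue_bulk_elem(), rte_ring_elem.h) are assumed to match the
API introduced by this series.

  #include <stdint.h>
  #include <rte_lcore.h>
  #include <rte_ring.h>
  #include <rte_ring_elem.h>

  struct elem16 { uint64_t lo, hi; };    /* 16B element, multiple of 4B */

  static int
  sketch(void)
  {
      void *objs[8] = { NULL };
      struct elem16 vals[8] = { {0, 0} };

      /* esize == -1 path: legacy ring of void * pointers */
      struct rte_ring *rp = rte_ring_create("sketch_ptr", 1024,
                          rte_socket_id(), 0);
      if (rp == NULL || rte_ring_enqueue_bulk(rp, objs, 8, NULL) != 8)
          return -1;

      /* esize == 16 path: ring of fixed-size 16B elements */
      struct rte_ring *re = rte_ring_create_elem("sketch_16B",
                          sizeof(struct elem16), 1024,
                          rte_socket_id(), 0);
      if (re == NULL || rte_ring_enqueue_bulk_elem(re, vals,
              sizeof(struct elem16), 8, NULL) != 8)
          return -1;

      return 0;
  }

Because the choice is hidden behind the test_ring_enqueue()/
test_ring_dequeue() wrappers, each timed loop in the perf test is identical
for the legacy and the 16B-element cases.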
Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
---
 app/test/test_ring_perf.c | 454 +++++++++++++++++++++++---------------
 1 file changed, 273 insertions(+), 181 deletions(-)

-- 
2.17.1

diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index 6c2aca483..8d1217951 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -13,6 +13,7 @@
 #include

 #include "test.h"
+#include "test_ring.h"

 /*
  * Ring
@@ -41,6 +42,35 @@ struct lcore_pair {

 static volatile unsigned lcore_count = 0;

+static void
+test_ring_print_test_string(unsigned int api_type, int esize,
+    unsigned int bsz, double value)
+{
+    if (esize == -1)
+        printf("legacy APIs");
+    else
+        printf("elem APIs: element size %dB", esize);
+
+    if (api_type == TEST_RING_IGNORE_API_TYPE)
+        return;
+
+    if ((api_type & TEST_RING_THREAD_DEF) == TEST_RING_THREAD_DEF)
+        printf(": default enqueue/dequeue: ");
+    else if ((api_type & TEST_RING_THREAD_SPSC) == TEST_RING_THREAD_SPSC)
+        printf(": SP/SC: ");
+    else if ((api_type & TEST_RING_THREAD_MPMC) == TEST_RING_THREAD_MPMC)
+        printf(": MP/MC: ");
+
+    if ((api_type & TEST_RING_ELEM_SINGLE) == TEST_RING_ELEM_SINGLE)
+        printf("single: ");
+    else if ((api_type & TEST_RING_ELEM_BULK) == TEST_RING_ELEM_BULK)
+        printf("bulk (size: %u): ", bsz);
+    else if ((api_type & TEST_RING_ELEM_BURST) == TEST_RING_ELEM_BURST)
+        printf("burst (size: %u): ", bsz);
+
+    printf("%.2F\n", value);
+}
+
 /**** Functions to analyse our core mask to get cores for different tests ***/

 static int
@@ -117,27 +147,21 @@ get_two_sockets(struct lcore_pair *lcp)
 /* Get cycle counts for dequeuing from an empty ring. Should be 2 or 3 cycles */
 static void
-test_empty_dequeue(struct rte_ring *r)
+test_empty_dequeue(struct rte_ring *r, const int esize,
+    const unsigned int api_type)
 {
-    const unsigned iter_shift = 26;
-    const unsigned iterations = 1<
+ * flag == 0 -> enqueue
+ * flag == 1 -> dequeue
  */
-static int
-enqueue_bulk(void *p)
+static __rte_always_inline int
+enqueue_dequeue_bulk_helper(const unsigned int flag, const int esize,
+    struct thread_params *p)
 {
-    const unsigned iter_shift = 23;
-    const unsigned iterations = 1<<iter_shift;
-    struct thread_params *params = p;
-    struct rte_ring *r = params->r;
-    const unsigned size = params->size;
-    unsigned i;
-    void *burst[MAX_BURST] = {0};
+    int ret;
+    const unsigned int iter_shift = 23;
+    const unsigned int iterations = 1 << iter_shift;
+    struct rte_ring *r = p->r;
+    unsigned int bsize = p->size;
+    unsigned int i;
+    void *burst = NULL;

 #ifdef RTE_USE_C11_MEM_MODEL
     if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2)
@@ -173,23 +199,67 @@ enqueue_bulk(void *p)
         while(lcore_count != 2)
             rte_pause();

+    burst = test_ring_calloc(MAX_BURST, esize);
+    if (burst == NULL)
+        return -1;
+
     const uint64_t sp_start = rte_rdtsc();
     for (i = 0; i < iterations; i++)
-        while (rte_ring_sp_enqueue_bulk(r, burst, size, NULL) == 0)
-            rte_pause();
+        do {
+            if (flag == 0)
+                ret = test_ring_enqueue(r, burst, esize, bsize,
+                        TEST_RING_THREAD_SPSC |
+                        TEST_RING_ELEM_BULK);
+            else if (flag == 1)
+                ret = test_ring_dequeue(r, burst, esize, bsize,
+                        TEST_RING_THREAD_SPSC |
+                        TEST_RING_ELEM_BULK);
+            if (ret == 0)
+                rte_pause();
+        } while (!ret);
     const uint64_t sp_end = rte_rdtsc();

     const uint64_t mp_start = rte_rdtsc();
     for (i = 0; i < iterations; i++)
-        while (rte_ring_mp_enqueue_bulk(r, burst, size, NULL) == 0)
-            rte_pause();
+        do {
+            if (flag == 0)
+                ret = test_ring_enqueue(r, burst, esize, bsize,
+                        TEST_RING_THREAD_MPMC |
+                        TEST_RING_ELEM_BULK);
+            else if (flag == 1)
+                ret = test_ring_dequeue(r, burst, esize, bsize,
+                        TEST_RING_THREAD_MPMC |
+                        TEST_RING_ELEM_BULK);
+            if (ret == 0)
+                rte_pause();
+        } while (!ret);
     const uint64_t mp_end = rte_rdtsc();

-    params->spsc = ((double)(sp_end - sp_start))/(iterations*size);
-    params->mpmc = ((double)(mp_end - mp_start))/(iterations*size);
+    p->spsc = ((double)(sp_end - sp_start))/(iterations * bsize);
+    p->mpmc = ((double)(mp_end - mp_start))/(iterations * bsize);
     return 0;
 }

+/*
+ * Function that uses rdtsc to measure timing for ring enqueue. Needs pair
+ * thread running dequeue_bulk function
+ */
+static int
+enqueue_bulk(void *p)
+{
+    struct thread_params *params = p;
+
+    return enqueue_dequeue_bulk_helper(0, -1, params);
+}
+
+static int
+enqueue_bulk_16B(void *p)
+{
+    struct thread_params *params = p;
+
+    return enqueue_dequeue_bulk_helper(0, 16, params);
+}
+
 /*
  * Function that uses rdtsc to measure timing for ring dequeue. Needs pair
  * thread running enqueue_bulk function
@@ -197,49 +267,38 @@ enqueue_bulk(void *p)
 static int
 dequeue_bulk(void *p)
 {
-    const unsigned iter_shift = 23;
-    const unsigned iterations = 1<<iter_shift;
-    struct thread_params *params = p;
-    struct rte_ring *r = params->r;
-    const unsigned size = params->size;
-    unsigned i;
-    void *burst[MAX_BURST] = {0};
-
-#ifdef RTE_USE_C11_MEM_MODEL
-    if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2)
-#else
-    if (__sync_add_and_fetch(&lcore_count, 1) != 2)
-#endif
-        while(lcore_count != 2)
-            rte_pause();
-    const uint64_t sc_start = rte_rdtsc();
-    for (i = 0; i < iterations; i++)
-        while (rte_ring_sc_dequeue_bulk(r, burst, size, NULL) == 0)
-            rte_pause();
-    const uint64_t sc_end = rte_rdtsc();
+    return enqueue_dequeue_bulk_helper(1, -1, params);
 }

-    const uint64_t mc_start = rte_rdtsc();
-    for (i = 0; i < iterations; i++)
-        while (rte_ring_mc_dequeue_bulk(r, burst, size, NULL) == 0)
-            rte_pause();
-    const uint64_t mc_end = rte_rdtsc();
+static int
+dequeue_bulk_16B(void *p)
+{
+    struct thread_params *params = p;

-    params->spsc = ((double)(sc_end - sc_start))/(iterations*size);
-    params->mpmc = ((double)(mc_end - mc_start))/(iterations*size);
-    return 0;
+    return enqueue_dequeue_bulk_helper(1, 16, params);
 }

 /*
  * Function that calls the enqueue and dequeue bulk functions on pairs of cores.
  * used to measure ring perf between hyperthreads, cores and sockets.
  */
-static void
-run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r,
-        lcore_function_t f1, lcore_function_t f2)
+static int
+run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, const int esize)
 {
+    lcore_function_t *f1, *f2;
     struct thread_params param1 = {0}, param2 = {0};
     unsigned i;
+
+    if (esize == -1) {
+        f1 = enqueue_bulk;
+        f2 = dequeue_bulk;
+    } else {
+        f1 = enqueue_bulk_16B;
+        f2 = dequeue_bulk_16B;
+    }
+
     for (i = 0; i < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); i++) {
         lcore_count = 0;
         param1.size = param2.size = bulk_sizes[i];
@@ -251,14 +310,20 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r,
         } else {
             rte_eal_remote_launch(f1, &param1, cores->c1);
             rte_eal_remote_launch(f2, &param2, cores->c2);
-            rte_eal_wait_lcore(cores->c1);
-            rte_eal_wait_lcore(cores->c2);
+            if (rte_eal_wait_lcore(cores->c1) < 0)
+                return -1;
+            if (rte_eal_wait_lcore(cores->c2) < 0)
+                return -1;
         }
-        printf("SP/SC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i],
-                param1.spsc + param2.spsc);
-        printf("MP/MC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i],
-                param1.mpmc + param2.mpmc);
+        test_ring_print_test_string(
+            TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK,
+            esize, bulk_sizes[i], param1.spsc + param2.spsc);
+        test_ring_print_test_string(
+            TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK,
+            esize, bulk_sizes[i], param1.mpmc + param2.mpmc);
     }
+
+    return 0;
 }

 static rte_atomic32_t synchro;
@@ -267,7 +332,7 @@ static uint64_t queue_count[RTE_MAX_LCORE];
 #define TIME_MS 100

 static int
-load_loop_fn(void *p)
+load_loop_fn_helper(struct thread_params *p, const int esize)
 {
     uint64_t time_diff = 0;
     uint64_t begin = 0;
@@ -275,7 +340,11 @@
     uint64_t lcount = 0;
     const unsigned int lcore = rte_lcore_id();
     struct thread_params *params = p;
-    void *burst[MAX_BURST] = {0};
+    void *burst = NULL;
+
+    burst = test_ring_calloc(MAX_BURST, esize);
+    if (burst == NULL)
+        return -1;

     /* wait synchro for slaves */
     if (lcore != rte_get_master_lcore())
@@ -284,22 +353,49 @@
     begin = rte_get_timer_cycles();
     while (time_diff < hz * TIME_MS / 1000) {
-        rte_ring_mp_enqueue_bulk(params->r, burst, params->size, NULL);
-        rte_ring_mc_dequeue_bulk(params->r, burst, params->size, NULL);
+        test_ring_enqueue(params->r, burst, esize, params->size,
+                TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK);
+        test_ring_dequeue(params->r, burst, esize, params->size,
+                TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK);
         lcount++;
         time_diff = rte_get_timer_cycles() - begin;
     }
     queue_count[lcore] = lcount;
+
+    rte_free(burst);
+
     return 0;
 }

 static int
-run_on_all_cores(struct rte_ring *r)
+load_loop_fn(void *p)
+{
+    struct thread_params *params = p;
+
+    return load_loop_fn_helper(params, -1);
+}
+
+static int
+load_loop_fn_16B(void *p)
+{
+    struct thread_params *params = p;
+
+    return load_loop_fn_helper(params, 16);
+}
+
+static int
+run_on_all_cores(struct rte_ring *r, const int esize)
 {
     uint64_t total = 0;
     struct thread_params param;
+    lcore_function_t *lcore_f;
     unsigned int i, c;

+    if (esize == -1)
+        lcore_f = load_loop_fn;
+    else
+        lcore_f = load_loop_fn_16B;
+
     memset(&param, 0, sizeof(struct thread_params));
     for (i = 0; i < RTE_DIM(bulk_sizes); i++) {
         printf("\nBulk enq/dequeue count on size %u\n", bulk_sizes[i]);
@@ -308,13 +404,12 @@ run_on_all_cores(struct rte_ring *r)

         /* clear synchro and start slaves */
         rte_atomic32_set(&synchro, 0);
-        if (rte_eal_mp_remote_launch(load_loop_fn, &param,
-            SKIP_MASTER) < 0)
+        if (rte_eal_mp_remote_launch(lcore_f, &param, SKIP_MASTER) < 0)
             return -1;

         /* start synchro and launch test on master */
         rte_atomic32_set(&synchro, 1);
-        load_loop_fn(&param);
+        lcore_f(&param);

         rte_eal_mp_wait_lcore();

@@ -335,155 +430,152 @@ run_on_all_cores(struct rte_ring *r)
  * Test function that determines how long an enqueue + dequeue of a single item
  * takes on a single lcore. Result is for comparison with the bulk enq+deq.
  */
-static void
-test_single_enqueue_dequeue(struct rte_ring *r)
+static int
+test_single_enqueue_dequeue(struct rte_ring *r, const int esize,
+    const unsigned int api_type)
 {
-    const unsigned iter_shift = 24;
-    const unsigned iterations = 1<