From patchwork Thu Jan 11 15:00:18 2018
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 124237
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Thu, 11 Jan 2018 18:00:18 +0300
Message-Id: <1515682819-12495-4-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515682819-12495-1-git-send-email-odpbot@yandex.ru>
References: <1515682819-12495-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 353
Subject: [lng-odp] [PATCH API-NEXT v4 3/4] validation: queue: multi-thread plain queue test

From: Petri Savolainen

Test plain queue enqueue and dequeue with multiple concurrent threads.
Test blocking and non-blocking lock-free implementations.
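For reference, a minimal sketch of the plain queue API that the new tests
exercise, assuming ODP is already initialized on the calling thread. The
function name, queue name, queue size and the single-event round trip are
illustrative only, not part of the patch:

#include <odp_api.h>

/* Create a lock-free plain queue when the implementation supports it,
 * then run one enqueue/dequeue round trip. 'ev' must be a valid event;
 * it is consumed (freed) on success. */
static int lockfree_plain_queue_sketch(odp_event_t ev)
{
	odp_queue_capability_t capa;
	odp_queue_param_t param;
	odp_queue_t queue;

	if (odp_queue_capability(&capa))
		return -1;

	/* Lock-free plain queues are an optional feature */
	if (capa.plain.lockfree.max_num == 0)
		return 0;

	odp_queue_param_init(&param);
	param.type        = ODP_QUEUE_TYPE_PLAIN;
	param.size        = 1024; /* illustrative; may be capped by
				   * capa.plain.lockfree.max_size */
	param.nonblocking = ODP_NONBLOCKING_LF;

	queue = odp_queue_create("lf_sketch", &param);
	if (queue == ODP_QUEUE_INVALID)
		return -1;

	/* A single round trip; the test below instead bounces events
	 * between many worker threads with retry loops. */
	if (odp_queue_enq(queue, ev) == 0) {
		ev = odp_queue_deq(queue);
		if (ev != ODP_EVENT_INVALID)
			odp_event_free(ev);
	}

	return odp_queue_destroy(queue);
}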
Signed-off-by: Petri Savolainen
---
/** Email created from pull request 353 (psavol:next-lockfree-queue-impl2)
 ** https://github.com/Linaro/odp/pull/353
 ** Patch: https://github.com/Linaro/odp/pull/353.patch
 ** Base sha: 6303c7d0e98fafe0f14c8c4dd9989b3b7633ebf4
 ** Merge commit sha: 065c75576263a97f76d1a47df24ee73cd18f54c5
 **/
 test/validation/api/queue/queue.c | 258 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 257 insertions(+), 1 deletion(-)

diff --git a/test/validation/api/queue/queue.c b/test/validation/api/queue/queue.c
index 1ff029176..59a917c08 100644
--- a/test/validation/api/queue/queue.c
+++ b/test/validation/api/queue/queue.c
@@ -14,6 +14,22 @@
 #define MAX_NUM_EVENT  (1 * 1024)
 #define MAX_ITERATION  (100)
 #define MAX_QUEUES     (64 * 1024)
+#define GLOBALS_NAME   "queue_test_globals"
+#define DEQ_RETRIES    100
+#define ENQ_RETRIES    100
+
+typedef struct {
+	pthrd_arg cu_thr;
+	int num_workers;
+	odp_barrier_t barrier;
+	odp_queue_t queue;
+	odp_atomic_u32_t num_event;
+
+	struct {
+		uint32_t num_event;
+	} thread[ODP_THREAD_COUNT_MAX];
+
+} test_globals_t;
 
 static int queue_context = 0xff;
 static odp_pool_t pool;
@@ -31,7 +47,30 @@ static void generate_name(char *name, uint32_t index)
 
 int queue_suite_init(void)
 {
+	odp_shm_t shm;
+	test_globals_t *globals;
 	odp_pool_param_t params;
+	int num_workers;
+	odp_cpumask_t mask;
+
+	shm = odp_shm_reserve(GLOBALS_NAME, sizeof(test_globals_t),
+			      ODP_CACHE_LINE_SIZE, 0);
+
+	if (shm == ODP_SHM_INVALID) {
+		printf("Shared memory reserve failed\n");
+		return -1;
+	}
+
+	globals = odp_shm_addr(shm);
+	memset(globals, 0, sizeof(test_globals_t));
+
+	num_workers = odp_cpumask_default_worker(&mask, 0);
+
+	if (num_workers > MAX_WORKERS)
+		num_workers = MAX_WORKERS;
+
+	globals->num_workers = num_workers;
+	odp_barrier_init(&globals->barrier, num_workers);
 
 	odp_pool_param_init(&params);
 
@@ -51,7 +90,25 @@ int queue_suite_init(void)
 
 int queue_suite_term(void)
 {
-	return odp_pool_destroy(pool);
+	odp_shm_t shm;
+
+	shm = odp_shm_lookup(GLOBALS_NAME);
+	if (shm == ODP_SHM_INVALID) {
+		printf("SHM lookup failed.\n");
+		return -1;
+	}
+
+	if (odp_shm_free(shm)) {
+		printf("SHM free failed.\n");
+		return -1;
+	}
+
+	if (odp_pool_destroy(pool)) {
+		printf("Pool destroy failed.\n");
+		return -1;
+	}
+
+	return 0;
 }
 
 void queue_test_capa(void)
@@ -411,12 +468,211 @@ void queue_test_info(void)
 	CU_ASSERT(odp_queue_destroy(q_order) == 0);
 }
 
+static uint32_t alloc_and_enqueue(odp_queue_t queue, odp_pool_t pool,
+				  uint32_t num)
+{
+	uint32_t i, ret;
+	odp_buffer_t buf;
+	odp_event_t ev;
+
+	for (i = 0; i < num; i++) {
+		buf = odp_buffer_alloc(pool);
+
+		CU_ASSERT(buf != ODP_BUFFER_INVALID);
+
+		ev = odp_buffer_to_event(buf);
+
+		ret = odp_queue_enq(queue, ev);
+
+		CU_ASSERT(ret == 0);
+
+		if (ret)
+			break;
+	}
+
+	return i;
+}
+
+static uint32_t dequeue_and_free_all(odp_queue_t queue)
+{
+	odp_event_t ev;
+	uint32_t num, retries;
+
+	num = 0;
+	retries = 0;
+
+	while (1) {
+		ev = odp_queue_deq(queue);
+
+		if (ev == ODP_EVENT_INVALID) {
+			if (retries >= DEQ_RETRIES)
+				return num;
+
+			retries++;
+			continue;
+		}
+
+		retries = 0;
+		num++;
+
+		odp_event_free(ev);
+	}
+
+	return num;
+}
+
+static int enqueue_with_retry(odp_queue_t queue, odp_event_t ev)
+{
+	int i;
+
+	for (i = 0; i < ENQ_RETRIES; i++)
+		if (odp_queue_enq(queue, ev) == 0)
+			return 0;
+
+	return -1;
+}
+
+static int queue_test_worker(void *arg)
+{
+	uint32_t num, retries, num_workers;
+	int thr_id, ret;
+	odp_event_t ev;
+	odp_queue_t queue;
+	test_globals_t *globals = arg;
+
+	thr_id = odp_thread_id();
+	queue = globals->queue;
+	num_workers = globals->num_workers;
+
+	if (num_workers > 1)
+		odp_barrier_wait(&globals->barrier);
+
+	retries = 0;
+	num = odp_atomic_fetch_inc_u32(&globals->num_event);
+
+	/* On average, each worker deq-enq each event once */
+	while (num < (num_workers * MAX_NUM_EVENT)) {
+		ev = odp_queue_deq(queue);
+
+		if (ev == ODP_EVENT_INVALID) {
+			if (retries < DEQ_RETRIES) {
+				retries++;
+				continue;
+			}
+
+			/* Prevent thread to starve */
+			num = odp_atomic_fetch_inc_u32(&globals->num_event);
+			retries = 0;
+			continue;
+		}
+
+		globals->thread[thr_id].num_event++;
+
+		ret = enqueue_with_retry(queue, ev);
+
+		CU_ASSERT(ret == 0);
+
+		num = odp_atomic_fetch_inc_u32(&globals->num_event);
+	}
+
+	return 0;
+}
+
+static void reset_thread_stat(test_globals_t *globals)
+{
+	int i;
+
+	odp_atomic_init_u32(&globals->num_event, 0);
+
+	for (i = 0; i < ODP_THREAD_COUNT_MAX; i++)
+		globals->thread[i].num_event = 0;
+}
+
+static void multithread_test(odp_nonblocking_t nonblocking)
+{
+	odp_shm_t shm;
+	test_globals_t *globals;
+	odp_queue_t queue;
+	odp_queue_param_t qparams;
+	odp_queue_capability_t capa;
+	uint32_t queue_size, max_size;
+	uint32_t num, sum, num_free, i;
+
+	CU_ASSERT(odp_queue_capability(&capa) == 0);
+
+	queue_size = 2 * MAX_NUM_EVENT;
+
+	max_size = capa.plain.max_size;
+
+	if (nonblocking == ODP_NONBLOCKING_LF) {
+		if (capa.plain.lockfree.max_num == 0)
+			return;
+
+		max_size = capa.plain.lockfree.max_size;
+	}
+
+	if (max_size && queue_size > max_size)
+		queue_size = max_size;
+
+	num = MAX_NUM_EVENT;
+
+	if (num > queue_size)
+		num = queue_size / 2;
+
+	shm = odp_shm_lookup(GLOBALS_NAME);
+	CU_ASSERT_FATAL(shm != ODP_SHM_INVALID);
+
+	globals = odp_shm_addr(shm);
+	globals->cu_thr.numthrds = globals->num_workers;
+
+	odp_queue_param_init(&qparams);
+	qparams.type = ODP_QUEUE_TYPE_PLAIN;
+	qparams.size = queue_size;
+	qparams.nonblocking = nonblocking;
+
+	queue = odp_queue_create("queue_test_mt", &qparams);
+	CU_ASSERT_FATAL(queue != ODP_QUEUE_INVALID);
+
+	globals->queue = queue;
+	reset_thread_stat(globals);
+
+	CU_ASSERT(alloc_and_enqueue(queue, pool, num) == num);
+
+	odp_cunit_thread_create(queue_test_worker, (pthrd_arg *)globals);
+
+	/* Wait for worker threads to terminate */
+	odp_cunit_thread_exit((pthrd_arg *)globals);
+
+	sum = 0;
+	for (i = 0; i < ODP_THREAD_COUNT_MAX; i++)
+		sum += globals->thread[i].num_event;
+
+	CU_ASSERT(sum != 0);
+
+	num_free = dequeue_and_free_all(queue);
+
+	CU_ASSERT(num_free == num);
+	CU_ASSERT(odp_queue_destroy(queue) == 0);
+}
+
+static void queue_test_mt_plain_block(void)
+{
+	multithread_test(ODP_BLOCKING);
+}
+
+static void queue_test_mt_plain_nonblock_lf(void)
+{
+	multithread_test(ODP_NONBLOCKING_LF);
+}
+
 odp_testinfo_t queue_suite[] = {
 	ODP_TEST_INFO(queue_test_capa),
 	ODP_TEST_INFO(queue_test_mode),
 	ODP_TEST_INFO(queue_test_lockfree),
 	ODP_TEST_INFO(queue_test_param),
 	ODP_TEST_INFO(queue_test_info),
+	ODP_TEST_INFO(queue_test_mt_plain_block),
+	ODP_TEST_INFO(queue_test_mt_plain_nonblock_lf),
 	ODP_TEST_INFO_NULL,
 };