From patchwork Thu Nov 20 19:02:24 2014
X-Patchwork-Submitter: Ciprian Barbu
X-Patchwork-Id: 41260
From: Ciprian Barbu <ciprian.barbu@linaro.org>
To: lng-odp@lists.linaro.org
Date: Thu, 20 Nov 2014 21:02:24 +0200
Message-Id: <1416510144-24926-1-git-send-email-ciprian.barbu@linaro.org>
X-Mailer: git-send-email 1.8.3.2
Subject: [lng-odp] [RFC] cunit: add tests for scheduler API

Signed-off-by: Ciprian Barbu <ciprian.barbu@linaro.org>
---
The test cases are based almost entirely on odp_example. There are no alloc
tests, and I added a test case for odp_schedule_wait_time. The major
difference between odp_example and this CUnit suite is the split into test
cases: odp_example calls every test from one big function. I had to work
some magic to be able to pass arguments to test cases; I hope it is not too
hard to follow.
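For reviewers unfamiliar with the trick: CUnit test functions must have the
signature void(void), so the patch routes per-case parameters through a
struct filled in by a thin wrapper per test case. A minimal self-contained
sketch of the same pattern follows, using plain CUnit and illustrative names
(case_args_t, run_case, check_prio and case_lo are placeholders for this
sketch, not symbols from the patch):

	#include <stdio.h>
	#include <CUnit/Basic.h>

	/* Parameters that a CUnit test body cannot receive directly. */
	typedef struct {
		char name[64];
		int prio;
		int (*func)(const char *name, int prio);
	} case_args_t;

	/* Shared template: unpack the arguments, run the real routine. */
	static void run_case(case_args_t *args)
	{
		CU_ASSERT(args->func(args->name, args->prio) == 0);
	}

	/* Example payload standing in for the scheduler test routines. */
	static int check_prio(const char *name, int prio)
	{
		printf("%s: prio %d\n", name, prio);
		return prio >= 0 ? 0 : -1;
	}

	/* One thin void(void) wrapper per case supplies the arguments. */
	static void case_lo(void)
	{
		case_args_t args = { .name = "case_lo", .prio = 0,
				     .func = check_prio };
		run_case(&args);
	}

	int main(void)
	{
		CU_pSuite suite;

		if (CU_initialize_registry() != CUE_SUCCESS)
			return CU_get_error();

		suite = CU_add_suite("wrapper demo", NULL, NULL);
		if (suite == NULL) {
			CU_cleanup_registry();
			return CU_get_error();
		}

		CU_add_test(suite, "case_lo", case_lo);
		CU_basic_set_mode(CU_BRM_VERBOSE);
		CU_basic_run_tests();
		CU_cleanup_registry();
		return CU_get_error();
	}

The patch applies the same idea, with exec_template() additionally doing the
barrier sync and pool lookup that are common to all test cases.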
 configure.ac                                  |   1 +
 test/cunit/Makefile.am                        |   2 +
 test/cunit/schedule/Makefile.am               |  10 +
 test/cunit/schedule/odp_schedule_test.c       | 844 ++++++++++++++++++++++++++
 test/cunit/schedule/odp_schedule_testsuites.c |  35 ++
 test/cunit/schedule/odp_schedule_testsuites.h |  21 +
 6 files changed, 913 insertions(+)
 create mode 100644 test/cunit/schedule/Makefile.am
 create mode 100644 test/cunit/schedule/odp_schedule_test.c
 create mode 100644 test/cunit/schedule/odp_schedule_testsuites.c
 create mode 100644 test/cunit/schedule/odp_schedule_testsuites.h

diff --git a/configure.ac b/configure.ac
index fcd7279..a47db72 100644
--- a/configure.ac
+++ b/configure.ac
@@ -173,6 +173,7 @@ AC_CONFIG_FILES([Makefile
		 test/Makefile
		 test/api_test/Makefile
		 test/cunit/Makefile
+		 test/cunit/schedule/Makefile
		 pkgconfig/libodp.pc])

 AC_SEARCH_LIBS([timer_create],[rt posix4])

diff --git a/test/cunit/Makefile.am b/test/cunit/Makefile.am
index 439e134..b6033ee 100644
--- a/test/cunit/Makefile.am
+++ b/test/cunit/Makefile.am
@@ -3,6 +3,8 @@ include $(top_srcdir)/test/Makefile.inc
 AM_CFLAGS += -I$(CUNIT_PATH)/include
 AM_LDFLAGS += -L$(CUNIT_PATH)/lib -static -lcunit

+SUBDIRS = schedule
+
 if ODP_CUNIT_ENABLED
 TESTS = ${bin_PROGRAMS}
 check_PROGRAMS = ${bin_PROGRAMS}

diff --git a/test/cunit/schedule/Makefile.am b/test/cunit/schedule/Makefile.am
new file mode 100644
index 0000000..ad68b03
--- /dev/null
+++ b/test/cunit/schedule/Makefile.am
@@ -0,0 +1,10 @@
+include $(top_srcdir)/test/Makefile.inc
+
+if ODP_CUNIT_ENABLED
+bin_PROGRAMS = odp_schedule_test
+odp_schedule_test_LDFLAGS = $(AM_LDFLAGS) -L$(CUNIT_PATH)/lib -static -lcunit
+odp_schedule_test_CFLAGS = $(AM_CFLAGS) -I$(CUNIT_PATH)/include
+endif
+
+dist_odp_schedule_test_SOURCES = odp_schedule_test.c \
+				 odp_schedule_testsuites.c

diff --git a/test/cunit/schedule/odp_schedule_test.c b/test/cunit/schedule/odp_schedule_test.c
new file mode 100644
index 0000000..fa67f6e
--- /dev/null
+++ b/test/cunit/schedule/odp_schedule_test.c
@@ -0,0 +1,844 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#include "odp_schedule_testsuites.h"
+#include <odph_linux.h>
+
+#define MAX_WORKERS	32		/**< Max worker threads */
+#define MSG_POOL_SIZE	(4*1024*1024)
+#define QUEUES_PER_PRIO	64		/**< Queues per priority */
+#define QUEUE_ROUNDS	(512*1024)	/**< Queue test rounds */
+#define MULTI_BUFS_MAX	4		/**< Buffer burst size */
+#define BUF_SIZE	64
+
+#define SCHED_MSG "Test_buff_FOR_simple_schedule"
+
+/** Test arguments */
+typedef struct {
+	int core_count; /**< Core count */
+	int proc_mode;  /**< Process mode */
+} test_args_t;
+
+typedef int (*test_case_routine)(const char *, int, odp_buffer_pool_t,
+				 int, odp_barrier_t *);
+
+/** Scheduler test case arguments */
+typedef struct {
+	char name[64]; /**< test case name */
+	int prio;
+	test_case_routine func;
+} test_case_args_t;
+
+/** Test global variables */
+typedef struct {
+	odp_barrier_t barrier; /**< @private Barrier for test synchronisation */
+	test_args_t test_args; /**< @private Test case function and arguments */
+} test_globals_t;
+
+static void execute_parallel(void *(*func) (void *), test_case_args_t *);
+static int num_workers;
+
+/**
+ * @internal CUnit test case for verifying functionality of
+ * schedule_wait_time
+ */
+static void schedule_wait_time(void)
+{
+	uint64_t wait_time;
+
+	wait_time = odp_schedule_wait_time(0);
+	CU_ASSERT(wait_time > 0);
+	CU_PASS("schedule_wait_time(0)");
+
+	wait_time = odp_schedule_wait_time(1);
+	CU_ASSERT(wait_time > 0);
+	CU_PASS("schedule_wait_time(1)");
+
+	wait_time = odp_schedule_wait_time((uint64_t)-1LL);
+	CU_ASSERT(wait_time > 0);
+	CU_PASS("schedule_wait_time(MAX_LONG_INT)");
+}
+
+/**
+ * @internal Clear all scheduled queues. Retry to be sure that all
+ * buffers have been scheduled.
+ */
+static void clear_sched_queues(void)
+{
+	odp_buffer_t buf;
+
+	while (1) {
+		buf = odp_schedule(NULL, ODP_SCHED_NO_WAIT);
+
+		if (buf == ODP_BUFFER_INVALID)
+			break;
+
+		odp_buffer_free(buf);
+	}
+}
+
+/**
+ * @internal Create multiple queues from a pool of buffers
+ *
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Queue priority
+ *
+ * @return 0 if successful
+ */
+static int create_queues(int thr, odp_buffer_pool_t msg_pool, int prio)
+{
+	char name[] = "sched_XX_YY";
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	int i;
+
+	name[6] = '0' + prio/10;
+	name[7] = '0' + prio - 10*(prio/10);
+
+	/* Alloc and enqueue a buffer per queue */
+	for (i = 0; i < QUEUES_PER_PRIO; i++) {
+		name[9]  = '0' + i/10;
+		name[10] = '0' + i - 10*(i/10);
+
+		queue = odp_queue_lookup(name);
+
+		if (queue == ODP_QUEUE_INVALID) {
+			ODP_ERR(" [%i] Queue %s lookup failed.\n", thr, name);
+			return -1;
+		}
+
+		buf = odp_buffer_alloc(msg_pool);
+
+		if (!odp_buffer_is_valid(buf)) {
+			ODP_ERR(" [%i] msg_pool alloc failed\n", thr);
+			return -1;
+		}
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * @internal Create a single queue from a pool of buffers
+ *
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Queue priority
+ *
+ * @return 0 if successful
+ */
+static int create_queue(int thr, odp_buffer_pool_t msg_pool, int prio)
+{
+	char name[] = "sched_XX_00";
+	odp_buffer_t buf;
+	odp_queue_t queue;
+
+	buf = odp_buffer_alloc(msg_pool);
+
+	if (!odp_buffer_is_valid(buf)) {
+		ODP_ERR(" [%i] msg_pool alloc failed\n", thr);
+		return -1;
+	}
+
+	name[6] = '0' + prio/10;
+	name[7] = '0' + prio - 10*(prio/10);
+
+	queue = odp_queue_lookup(name);
+
+	if (queue == ODP_QUEUE_INVALID) {
+		ODP_ERR(" [%i] Queue %s lookup failed.\n", thr, name);
+		return -1;
+	}
+
+	if (odp_queue_enq(queue, buf)) {
+		ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+		return -1;
+	}
+
+	return 0;
+}
+
+/**
+ * @internal Test scheduling of a single queue - with odp_schedule_one()
+ *
+ * Enqueue a buffer to the shared queue. Schedule and enqueue the received
+ * buffer back into the queue.
+ *
+ * @param str      Test case name string
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Priority
+ * @param barrier  Barrier
+ *
+ * @return 0 if successful
+ */
+static int test_schedule_one_single(const char *str, int thr,
+				    odp_buffer_pool_t msg_pool,
+				    int prio, odp_barrier_t *barrier)
+{
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	uint64_t t1, t2, cycles, ns;
+	uint32_t i;
+	uint32_t tot = 0;
+
+	if (create_queue(thr, msg_pool, prio)) {
+		CU_FAIL_FATAL("lookup queue");
+		return -1;
+	}
+
+	t1 = odp_time_get_cycles();
+
+	for (i = 0; i < QUEUE_ROUNDS; i++) {
+		buf = odp_schedule_one(&queue, ODP_SCHED_WAIT);
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	if (odp_queue_sched_type(queue) == ODP_SCHED_SYNC_ATOMIC)
+		odp_schedule_release_atomic();
+
+	t2 = odp_time_get_cycles();
+	cycles = odp_time_diff_cycles(t1, t2);
+	ns = odp_time_cycles_to_ns(cycles);
+	tot = i;
+
+	odp_barrier_sync(barrier);
+	clear_sched_queues();
+
+	cycles = cycles/tot;
+	ns = ns/tot;
+
+	printf(" [%i] %s enq+deq %"PRIu64" cycles, %"PRIu64" ns\n",
+	       thr, str, cycles, ns);
+
+	return 0;
+}
+
+/**
+ * @internal Test scheduling of multiple queues - with odp_schedule_one()
+ *
+ * Enqueue a buffer to each queue. Schedule and enqueue the received
+ * buffer back into the queue it came from.
+ *
+ * @param str      Test case name string
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Priority
+ * @param barrier  Barrier
+ *
+ * @return 0 if successful
+ */
+static int test_schedule_one_many(const char *str, int thr,
+				  odp_buffer_pool_t msg_pool,
+				  int prio, odp_barrier_t *barrier)
+{
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	uint64_t t1 = 0;
+	uint64_t t2 = 0;
+	uint64_t cycles, ns;
+	uint32_t i;
+	uint32_t tot = 0;
+
+	if (create_queues(thr, msg_pool, prio))
+		return -1;
+
+	/* Start sched-enq loop */
+	t1 = odp_time_get_cycles();
+
+	for (i = 0; i < QUEUE_ROUNDS; i++) {
+		buf = odp_schedule_one(&queue, ODP_SCHED_WAIT);
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	if (odp_queue_sched_type(queue) == ODP_SCHED_SYNC_ATOMIC)
+		odp_schedule_release_atomic();
+
+	t2 = odp_time_get_cycles();
+	cycles = odp_time_diff_cycles(t1, t2);
+	ns = odp_time_cycles_to_ns(cycles);
+	tot = i;
+
+	odp_barrier_sync(barrier);
+	clear_sched_queues();
+
+	cycles = cycles/tot;
+	ns = ns/tot;
+
+	printf(" [%i] %s enq+deq %"PRIu64" cycles, %"PRIu64" ns\n",
+	       thr, str, cycles, ns);
+
+	return 0;
+}
+
+/**
+ * @internal Test scheduling of a single queue - with odp_schedule()
+ *
+ * Enqueue a buffer to the shared queue. Schedule and enqueue the received
+ * buffer back into the queue.
+ *
+ * @param str      Test case name string
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Priority
+ * @param barrier  Barrier
+ *
+ * @return 0 if successful
+ */
+static int test_schedule_single(const char *str, int thr,
+				odp_buffer_pool_t msg_pool,
+				int prio, odp_barrier_t *barrier)
+{
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	uint64_t t1, t2, cycles, ns;
+	uint32_t i;
+	uint32_t tot = 0;
+
+	if (create_queue(thr, msg_pool, prio))
+		return -1;
+
+	t1 = odp_time_get_cycles();
+
+	for (i = 0; i < QUEUE_ROUNDS; i++) {
+		buf = odp_schedule(&queue, ODP_SCHED_WAIT);
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	/* Clear possible locally stored buffers */
+	odp_schedule_pause();
+
+	tot = i;
+
+	while (1) {
+		buf = odp_schedule(&queue, ODP_SCHED_NO_WAIT);
+
+		if (buf == ODP_BUFFER_INVALID)
+			break;
+
+		tot++;
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	odp_schedule_resume();
+
+	t2 = odp_time_get_cycles();
+	cycles = odp_time_diff_cycles(t1, t2);
+	ns = odp_time_cycles_to_ns(cycles);
+
+	odp_barrier_sync(barrier);
+	clear_sched_queues();
+
+	cycles = cycles/tot;
+	ns = ns/tot;
+
+	printf(" [%i] %s enq+deq %"PRIu64" cycles, %"PRIu64" ns\n",
+	       thr, str, cycles, ns);
+
+	return 0;
+}
+
+/**
+ * @internal Test scheduling of multiple queues - with odp_schedule()
+ *
+ * Enqueue a buffer to each queue. Schedule and enqueue the received
+ * buffer back into the queue it came from.
+ *
+ * @param str      Test case name string
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Priority
+ * @param barrier  Barrier
+ *
+ * @return 0 if successful
+ */
+static int test_schedule_many(const char *str, int thr,
+			      odp_buffer_pool_t msg_pool,
+			      int prio, odp_barrier_t *barrier)
+{
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	uint64_t t1 = 0;
+	uint64_t t2 = 0;
+	uint64_t cycles, ns;
+	uint32_t i;
+	uint32_t tot = 0;
+
+	if (create_queues(thr, msg_pool, prio))
+		return -1;
+
+	/* Start sched-enq loop */
+	t1 = odp_time_get_cycles();
+
+	for (i = 0; i < QUEUE_ROUNDS; i++) {
+		buf = odp_schedule(&queue, ODP_SCHED_WAIT);
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	/* Clear possible locally stored buffers */
+	odp_schedule_pause();
+
+	tot = i;
+
+	while (1) {
+		buf = odp_schedule(&queue, ODP_SCHED_NO_WAIT);
+
+		if (buf == ODP_BUFFER_INVALID)
+			break;
+
+		tot++;
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	odp_schedule_resume();
+
+	t2 = odp_time_get_cycles();
+	cycles = odp_time_diff_cycles(t1, t2);
+	ns = odp_time_cycles_to_ns(cycles);
+
+	odp_barrier_sync(barrier);
+	clear_sched_queues();
+
+	cycles = cycles/tot;
+	ns = ns/tot;
+
+	printf(" [%i] %s enq+deq %"PRIu64" cycles, %"PRIu64" ns\n",
+	       thr, str, cycles, ns);
+
+	return 0;
+}
+
+/**
+ * @internal Test scheduling of multiple queues with multi_sched and multi_enq
+ *
+ * @param str      Test case name string
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Priority
+ * @param barrier  Barrier
+ *
+ * @return 0 if successful
+ */
+static int test_schedule_multi(const char *str, int thr,
+			       odp_buffer_pool_t msg_pool,
+			       int prio, odp_barrier_t *barrier)
+{
+	odp_buffer_t buf[MULTI_BUFS_MAX];
+	odp_queue_t queue;
+	uint64_t t1 = 0;
+	uint64_t t2 = 0;
+	uint64_t cycles, ns;
+	int i, j;
+	int num;
+	uint32_t tot = 0;
+	char name[] = "sched_XX_YY";
+
+	name[6] = '0' + prio/10;
+	name[7] = '0' + prio - 10*(prio/10);
+
+	/* Alloc and enqueue a buffer per queue */
+	for (i = 0; i < QUEUES_PER_PRIO; i++) {
+		name[9]  = '0' + i/10;
+		name[10] = '0' + i - 10*(i/10);
+
+		queue = odp_queue_lookup(name);
+
+		if (queue == ODP_QUEUE_INVALID) {
+			ODP_ERR(" [%i] Queue %s lookup failed.\n", thr, name);
+			return -1;
+		}
+
+		for (j = 0; j < MULTI_BUFS_MAX; j++) {
+			buf[j] = odp_buffer_alloc(msg_pool);
+
+			if (!odp_buffer_is_valid(buf[j])) {
+				ODP_ERR(" [%i] msg_pool alloc failed\n", thr);
+				return -1;
+			}
+		}
+
+		if (odp_queue_enq_multi(queue, buf, MULTI_BUFS_MAX)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	/* Start sched-enq loop */
+	t1 = odp_time_get_cycles();
+
+	for (i = 0; i < QUEUE_ROUNDS; i++) {
+		num = odp_schedule_multi(&queue, ODP_SCHED_WAIT, buf,
+					 MULTI_BUFS_MAX);
+
+		tot += num;
+
+		if (odp_queue_enq_multi(queue, buf, num)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	/* Clear possible locally stored buffers */
+	odp_schedule_pause();
+
+	while (1) {
+		num = odp_schedule_multi(&queue, ODP_SCHED_NO_WAIT, buf,
+					 MULTI_BUFS_MAX);
+
+		if (num == 0)
+			break;
+
+		tot += num;
+
+		if (odp_queue_enq_multi(queue, buf, num)) {
+			ODP_ERR(" [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	odp_schedule_resume();
+
+	t2 = odp_time_get_cycles();
+	cycles = odp_time_diff_cycles(t1, t2);
+	ns = odp_time_cycles_to_ns(cycles);
+
+	odp_barrier_sync(barrier);
+	clear_sched_queues();
+
+	if (tot) {
+		cycles = cycles/tot;
+		ns = ns/tot;
+	} else {
+		cycles = 0;
+		ns = 0;
+	}
+
+	printf(" [%i] %s enq+deq %"PRIu64" cycles, %"PRIu64" ns\n",
+	       thr, str, cycles, ns);
+
+	return 0;
+}
+
+/**
+ * Template function for running the scheduler tests.
+ * The main reason for having this function is that CUnit does not offer
+ * a way to pass arguments to a testcase function.
+ * The other reason is that there are common steps for all testcases.
+ */
+static void *exec_template(void *arg)
+{
+	odp_buffer_pool_t msg_pool;
+	odp_shm_t shm;
+	test_globals_t *globals;
+	odp_barrier_t *barrier;
+	test_case_args_t *args = (test_case_args_t *)arg;
+
+	shm = odp_shm_lookup("test_globals");
+	globals = odp_shm_addr(shm);
+
+	CU_ASSERT(globals != NULL);
+
+	barrier = &globals->barrier;
+
+	/*
+	 * Sync before start
+	 */
+	odp_barrier_sync(barrier);
+
+	/*
+	 * Find the buffer pool
+	 */
+	msg_pool = odp_buffer_pool_lookup("msg_pool");
+
+	CU_ASSERT(msg_pool != ODP_BUFFER_POOL_INVALID);
+
+	odp_barrier_sync(barrier);
+
+	/*
+	 * Now run the testcase routine passing the arguments
+	 */
+	args->func(args->name, odp_thread_id(), msg_pool,
+		   args->prio, barrier);
+
+	return arg;
+}
+
+/* Low prio */
+
+static void schedule_one_single_lo(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_one_s_lo");
+	args.prio = ODP_SCHED_PRIO_LOWEST;
+	args.func = test_schedule_one_single;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_single_lo(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_____s_lo");
+	args.prio = ODP_SCHED_PRIO_LOWEST;
+	args.func = test_schedule_single;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_one_many_lo(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_one_m_lo");
+	args.prio = ODP_SCHED_PRIO_LOWEST;
+	args.func = test_schedule_one_many;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_many_lo(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_____m_lo");
+	args.prio = ODP_SCHED_PRIO_LOWEST;
+	args.func = test_schedule_many;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_multi_lo(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_multi_lo");
+	args.prio = ODP_SCHED_PRIO_LOWEST;
+	args.func = test_schedule_multi;
+	execute_parallel(exec_template, &args);
+}
+
+/* High prio */
+
+static void schedule_one_single_hi(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_one_s_hi");
+	args.prio = ODP_SCHED_PRIO_HIGHEST;
+	args.func = test_schedule_one_single;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_single_hi(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_____s_hi");
+	args.prio = ODP_SCHED_PRIO_HIGHEST;
+	args.func = test_schedule_single;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_one_many_hi(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_one_m_hi");
+	args.prio = ODP_SCHED_PRIO_HIGHEST;
+	args.func = test_schedule_one_many;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_many_hi(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_____m_hi");
+	args.prio = ODP_SCHED_PRIO_HIGHEST;
+	args.func = test_schedule_many;
+	execute_parallel(exec_template, &args);
+}
+
+static void schedule_multi_hi(void)
+{
+	test_case_args_t args;
+	snprintf(args.name, sizeof(args.name), "sched_multi_hi");
+	args.prio = ODP_SCHED_PRIO_HIGHEST;
+	args.func = test_schedule_multi;
+	execute_parallel(exec_template, &args);
+}
+
+static void execute_parallel(void *(*start_routine) (void *),
+			     test_case_args_t *test_case_args)
+{
+	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
+	int first_core;
+
+	memset(thread_tbl, 0, sizeof(thread_tbl));
+
+	/*
+	 * By default core #0 runs Linux kernel background tasks.
+	 * Start mapping thread from core #1
+	 */
+	first_core = 1;
+
+	if (odp_sys_core_count() == 1)
+		first_core = 0;
+
+	odph_linux_pthread_create(thread_tbl, num_workers, first_core,
+				  start_routine, test_case_args);
+
+	/* Wait for worker threads to terminate */
+	odph_linux_pthread_join(thread_tbl, num_workers);
+}
+
+static odp_buffer_pool_t test_odp_buffer_pool_init(void)
+{
+	void *pool_base;
+	odp_shm_t shm;
+	odp_buffer_pool_t pool;
+
+	shm = odp_shm_reserve("msg_pool",
+			      MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+
+	pool_base = odp_shm_addr(shm);
+
+	if (NULL == pool_base) {
+		printf("Shared memory reserve failed.\n");
+		return ODP_BUFFER_POOL_INVALID;
+	}
+
+	pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE,
+				      BUF_SIZE, ODP_CACHE_LINE_SIZE,
+				      ODP_BUFFER_TYPE_RAW);
+
+	if (ODP_BUFFER_POOL_INVALID == pool) {
+		printf("Pool create failed.\n");
+		return ODP_BUFFER_POOL_INVALID;
+	}
+	return pool;
+}
+
+int schedule_test_init(void)
+{
+	test_args_t args;
+	odp_shm_t shm;
+	test_globals_t *globals;
+	int i, j;
+	int prios;
+
+	if (0 != odp_init_global(NULL, NULL)) {
+		printf("odp_init_global fail.\n");
+		return -1;
+	}
+	if (0 != odp_init_local()) {
+		printf("odp_init_local fail.\n");
+		return -1;
+	}
+	if (ODP_BUFFER_POOL_INVALID == test_odp_buffer_pool_init()) {
+		printf("test_odp_buffer_pool_init fail.\n");
+		return -1;
+	}
+
+	/* No core count override by default */
+	memset(&args, 0, sizeof(args));
+
+	/* A worker thread per core */
+	num_workers = odp_sys_core_count();
+
+	if (args.core_count)
+		num_workers = args.core_count;
+
+	/* force to max core count */
+	if (num_workers > MAX_WORKERS)
+		num_workers = MAX_WORKERS;
+
+	shm = odp_shm_reserve("test_globals",
+			      sizeof(test_globals_t), ODP_CACHE_LINE_SIZE, 0);
+
+	globals = odp_shm_addr(shm);
+
+	if (globals == NULL) {
+		ODP_ERR("Shared memory reserve failed.\n");
+		return -1;
+	}
+
+	memset(globals, 0, sizeof(test_globals_t));
+
+	/* Barrier to sync test case execution */
+	odp_barrier_init_count(&globals->barrier, num_workers);
+
+	prios = odp_schedule_num_prio();
+
+	for (i = 0; i < prios; i++) {
+		odp_queue_param_t param;
+		odp_queue_t queue;
+		char name[] = "sched_XX_YY";
+
+		if (i != ODP_SCHED_PRIO_HIGHEST &&
+		    i != ODP_SCHED_PRIO_LOWEST)
+			continue;
+
+		name[6] = '0' + i/10;
+		name[7] = '0' + i - 10*(i/10);
+
+		param.sched.prio  = i;
+		param.sched.sync  = ODP_SCHED_SYNC_ATOMIC;
+		param.sched.group = ODP_SCHED_GROUP_DEFAULT;
+
+		for (j = 0; j < QUEUES_PER_PRIO; j++) {
+			name[9]  = '0' + j/10;
+			name[10] = '0' + j - 10*(j/10);
+
+			queue = odp_queue_create(name, ODP_QUEUE_TYPE_SCHED,
+						 &param);
+
+			if (queue == ODP_QUEUE_INVALID) {
+				ODP_ERR("Schedule queue create failed.\n");
+				return -1;
+			}
+		}
+	}
+	return 0;
+}
+
+int schedule_test_finalize(void)
+{
+	odp_term_local();
+	odp_term_global();
+	return 0;
+}
+
+struct CU_TestInfo schedule_tests[] = {
+	_CU_TEST_INFO(schedule_wait_time),
+	_CU_TEST_INFO(schedule_one_single_lo),
+	_CU_TEST_INFO(schedule_single_lo),
+	_CU_TEST_INFO(schedule_one_many_lo),
+	_CU_TEST_INFO(schedule_many_lo),
+	_CU_TEST_INFO(schedule_multi_lo),
+	_CU_TEST_INFO(schedule_one_single_hi),
+	_CU_TEST_INFO(schedule_single_hi),
+	_CU_TEST_INFO(schedule_one_many_hi),
+	_CU_TEST_INFO(schedule_many_hi),
+	_CU_TEST_INFO(schedule_multi_hi),
+	CU_TEST_INFO_NULL,
+};

diff --git a/test/cunit/schedule/odp_schedule_testsuites.c b/test/cunit/schedule/odp_schedule_testsuites.c
new file mode 100644
index 0000000..1053069
--- /dev/null
+++ b/test/cunit/schedule/odp_schedule_testsuites.c
@@ -0,0 +1,35 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#include "odp_schedule_testsuites.h"
+
+static CU_SuiteInfo suites[] = {
+	{
+		"Scheduler tests",
+		schedule_test_init,
+		schedule_test_finalize,
+		NULL,
+		NULL,
+		schedule_tests
+	},
+	CU_SUITE_INFO_NULL,
+};
+
+int main(void)
+{
+	/* Initialize the CUnit test registry */
+	if (CUE_SUCCESS != CU_initialize_registry())
+		return CU_get_error();
+
+	/* Register suites */
+	CU_register_suites(suites);
+
+	/* Run all tests using the CUnit Basic interface */
+	CU_basic_set_mode(CU_BRM_VERBOSE);
+	CU_basic_run_tests();
+	CU_cleanup_registry();
+
+	return CU_get_error();
+}

diff --git a/test/cunit/schedule/odp_schedule_testsuites.h b/test/cunit/schedule/odp_schedule_testsuites.h
new file mode 100644
index 0000000..67a2a69
--- /dev/null
+++ b/test/cunit/schedule/odp_schedule_testsuites.h
@@ -0,0 +1,21 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef ODP_SCHEDULE_TESTSUITES_H_
+#define ODP_SCHEDULE_TESTSUITES_H_
+
+#include "odp.h"
+#include <CUnit/Basic.h>
+
+/* Helper macro for CU_TestInfo initialization */
+#define _CU_TEST_INFO(test_func) {#test_func, test_func}
+
+extern struct CU_TestInfo schedule_tests[];
+
+extern int schedule_test_init(void);
+extern int schedule_test_finalize(void);
+
+#endif /* ODP_SCHEDULE_TESTSUITES_H_ */
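A note on the runner structure: because suites are described by a
CU_SuiteInfo table, a future test module plugs in as one more row before
CU_SUITE_INFO_NULL. A hypothetical sketch (the pktio_* symbols below are
placeholders for illustration only, not part of this patch):

	/* Hypothetical second suite; pktio_tests and its init/finalize
	 * functions would live in their own translation unit, mirroring
	 * the schedule_tests layout. */
	extern struct CU_TestInfo pktio_tests[];
	extern int pktio_test_init(void);
	extern int pktio_test_finalize(void);

	static CU_SuiteInfo suites[] = {
		{
			"Scheduler tests",
			schedule_test_init,
			schedule_test_finalize,
			NULL,		/* per-test setup */
			NULL,		/* per-test teardown */
			schedule_tests
		},
		{
			"Pktio tests",	/* placeholder suite */
			pktio_test_init,
			pktio_test_finalize,
			NULL,
			NULL,
			pktio_tests
		},
		CU_SUITE_INFO_NULL,
	};

CU_register_suites() in main() would then pick up both suites with no other
changes to the runner.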