From patchwork Fri Apr 18 12:56:30 2014
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 28635
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Fri, 18 Apr 2014 16:56:30 +0400
Message-Id: <1397825790-23703-1-git-send-email-maxim.uvarov@linaro.org>
Subject: [lng-odp] [ODP/PATCHv2] Scheduler development and timer test

From: Petri Savolainen

Added a timer test and modified the scheduler API for cleaner wait and pause
functionality.

- Added test/timer and removed the timer test code from test/example
- Added a scheduler wait parameter: cleaner control of wait / no wait /
  how long to wait
- Added scheduler pause/resume, which gives the application a clean way to
  break out of the schedule loop (when the scheduler has potentially
  optimized throughput with a thread-local stash of buffers)
- Added odp_schedule_one(), which can be used to optimize application
  RT/QoS vs. throughput
- Added queue and time helpers, used by the scheduler and timer tests

Signed-off-by: Petri Savolainen
---
v2: fixed doxygen from my previous patch (no empty line fixes).

Is this patch ok for merging?

Best regards,
Maxim.
 include/odp_queue.h                           |  10 +
 include/odp_schedule.h                        | 112 +++++---
 include/odp_time.h                            |   9 +
 platform/linux-generic/source/odp_queue.c     |   9 +
 platform/linux-generic/source/odp_schedule.c  | 106 ++++---
 platform/linux-generic/source/odp_time.c      |  21 +-
 test/Makefile                                 |   3 +
 test/example/odp_example.c                    | 388 ++++++++++++++++++--------
 test/packet/odp_example_pktio.c               |   2 +-
 test/packet_netmap/odp_example_pktio_netmap.c |   2 +-
 test/timer/Makefile                           |  46 +++
 test/timer/odp_timer_test.c                   | 335 ++++++++++++++++++++++
 12 files changed, 839 insertions(+), 204 deletions(-)
 create mode 100644 test/timer/Makefile
 create mode 100644 test/timer/odp_timer_test.c

diff --git a/include/odp_queue.h b/include/odp_queue.h
index 24806eb..6401aea 100644
--- a/include/odp_queue.h
+++ b/include/odp_queue.h
@@ -178,6 +178,16 @@ int odp_queue_deq_multi(odp_queue_t queue, odp_buffer_t buf[], int num);
  */
 odp_queue_type_t odp_queue_type(odp_queue_t queue);
 
+/**
+ * Queue schedule type
+ *
+ * @param queue   Queue handle
+ *
+ * @return Queue schedule synchronisation type
+ */
+odp_schedule_sync_t odp_queue_sched_type(odp_queue_t queue);
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/include/odp_schedule.h b/include/odp_schedule.h
index f146157..8087021 100644
--- a/include/odp_schedule.h
+++ b/include/odp_schedule.h
@@ -19,94 +19,122 @@ extern "C" {
 #endif
 
+#include
 #include
 #include
 
+#define ODP_SCHED_WAIT     0 /**< Wait infinitely */
+#define ODP_SCHED_NO_WAIT  1 /**< Do not wait */
+
+
 /**
- * Schedule once
+ * Schedule wait time
  *
- * Schedules all queues created with ODP_QUEUE_TYPE_SCHED type. Returns
- * next highest priority buffer which is available for the calling thread.
- * Outputs the source queue. Returns ODP_BUFFER_INVALID if no buffer
- * was available.
+ * Converts nanoseconds to wait values for other schedule functions.
  *
- * @param from    Queue pointer for outputing the queue where the buffer was
- *                dequeued from. Ignored if NULL.
+ * @param ns Nanoseconds
  *
- * @return Next highest priority buffer, or ODP_BUFFER_INVALID
+ * @return Value for the wait parameter in schedule functions
  */
-odp_buffer_t odp_schedule_once(odp_queue_t *from);
+uint64_t odp_schedule_wait_time(uint64_t ns);
 
 /**
  * Schedule
  *
- * Like odp_schedule_once(), but blocks until a buffer is available.
+ * Schedules all queues created with ODP_QUEUE_TYPE_SCHED type. Returns
+ * next highest priority buffer which is available for the calling thread.
+ * Outputs the source queue of the buffer. If there's no buffer available, waits
+ * for a buffer according to the wait parameter setting. Returns
+ * ODP_BUFFER_INVALID if reaches end of the wait period.
  *
- * @param from    Queue pointer for outputing the queue where the buffer was
- *                dequeued from. Ignored if NULL.
+ * @param from    Output parameter for the source queue (where the buffer was
+ *                dequeued from). Ignored if NULL.
+ * @param wait    Minimum time to wait for a buffer. Waits infinitely, if set to
+ *                ODP_SCHED_WAIT. Does not wait, if set to ODP_SCHED_NO_WAIT.
+ *                Use odp_schedule_wait_time() to convert time to other wait
+ *                values.
  *
- * @return Next highest priority buffer
+ * @return Next highest priority buffer, or ODP_BUFFER_INVALID
  */
-odp_buffer_t odp_schedule(odp_queue_t *from);
+odp_buffer_t odp_schedule(odp_queue_t *from, uint64_t wait);
 
 /**
- * Schedule, non-blocking
+ * Schedule one buffer
+ *
+ * Like odp_schedule(), but is quaranteed to schedule only one buffer at a time.
+ * Each call will perform global scheduling and will reserve one buffer per
+ * thread in maximum. When called after other schedule functions, returns
+ * locally stored buffers (if any) first, and then continues in the global
+ * scheduling mode.
  *
- * Like odp_schedule(), but returns after 'n' empty schedule rounds.
+ * This function optimises priority scheduling (over throughput).
  *
- * @param from    Queue pointer for outputing the queue where the buffer was
- *                dequeued from. Ignored if NULL.
- * @param n       Number of empty schedule rounds before returning
- *                ODP_BUFFER_INVALID
+ * User can exit the schedule loop without first calling odp_schedule_pause().
+ *
+ * @param from    Output parameter for the source queue (where the buffer was
+ *                dequeued from). Ignored if NULL.
+ * @param wait    Minimum time to wait for a buffer. Waits infinitely, if set to
+ *                ODP_SCHED_WAIT. Does not wait, if set to ODP_SCHED_NO_WAIT.
+ *                Use odp_schedule_wait_time() to convert time to other wait
+ *                values.
  *
  * @return Next highest priority buffer, or ODP_BUFFER_INVALID
  */
-odp_buffer_t odp_schedule_n(odp_queue_t *from, unsigned int n);
+odp_buffer_t odp_schedule_one(odp_queue_t *from, uint64_t wait);
+
 
 /**
- * Schedule, multiple buffers
+ * Schedule multiple buffers
  *
  * Like odp_schedule(), but returns multiple buffers from a queue.
  *
- * @param from    Queue pointer for outputing the queue where the buffers were
-  *               dequeued from. Ignored if NULL.
+ * @param from    Output parameter for the source queue (where the buffer was
+ *                dequeued from). Ignored if NULL.
+ * @param wait    Minimum time to wait for a buffer. Waits infinitely, if set to
+ *                ODP_SCHED_WAIT. Does not wait, if set to ODP_SCHED_NO_WAIT.
+ *                Use odp_schedule_wait_time() to convert time to other wait
+ *                values.
  * @param out_buf Buffer array for output
  * @param num     Maximum number of buffers to output
  *
  * @return Number of buffers outputed (0 ... num)
  */
-int odp_schedule_multi(odp_queue_t *from, odp_buffer_t out_buf[],
+int odp_schedule_multi(odp_queue_t *from, uint64_t wait, odp_buffer_t out_buf[],
                        unsigned int num);
 
 /**
- * Schedule, multiple buffers, non-blocking
+ * Pause scheduling
  *
- * Like odp_schedule_multi(), but returns after 'n' empty schedule rounds.
- *
- * @param from    Queue pointer for outputing the queue where the buffers were
- *                dequeued from. Ignored if NULL.
- * @param out_buf Buffer array for output
- * @param num     Maximum number of buffers to output
- * @param n       Number of empty schedule rounds before returning
- *                ODP_BUFFER_INVALID
+ * Pause global scheduling for this thread. After this call, all schedule calls
+ * will return only locally reserved buffers (if any). User can exit the
+ * schedule loop only after the schedule function indicates that there's no more
+ * buffers (no more locally reserved buffers).
  *
- * @return Number of buffers outputed (0 ... num)
+ * Must be used with odp_schedule() and odp_schedule_multi() before exiting (or
+ * stalling) the schedule loop.
  */
-int odp_schedule_multi_n(odp_queue_t *from, odp_buffer_t out_buf[],
-                         unsigned int num, unsigned int n);
+void odp_schedule_pause(void);
 
 /**
- * Number of scheduling priorities
+ * Resume scheduling
  *
- * @return Number of scheduling priorities
+ * Resume global scheduling for this thread. After this call, all schedule calls
+ * will schedule normally (perform global scheduling).
  */
-int odp_schedule_num_prio(void);
+void odp_schedule_resume(void);
 
 /**
  * Release currently hold atomic context
  */
-void odp_schedule_release_atomic_context(void);
+void odp_schedule_release_atomic(void);
+
+/**
+ * Number of scheduling priorities
+ *
+ * @return Number of scheduling priorities
+ */
+int odp_schedule_num_prio(void);
 
 
 #ifdef __cplusplus
@@ -114,5 +142,3 @@ void odp_schedule_release_atomic_context(void);
 #endif
 
 #endif
-
-
diff --git a/include/odp_time.h b/include/odp_time.h
index 97da002..d552222 100644
--- a/include/odp_time.h
+++ b/include/odp_time.h
@@ -51,6 +51,15 @@ uint64_t odp_time_diff_cycles(uint64_t t1, uint64_t t2);
 uint64_t odp_time_cycles_to_ns(uint64_t cycles);
 
+/**
+ * Convert nanoseconds to CPU cycles
+ *
+ * @param ns Time in nanoseconds
+ *
+ * @return Time in CPU cycles
+ */
+uint64_t odp_time_ns_to_cycles(uint64_t ns);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/source/odp_queue.c b/platform/linux-generic/source/odp_queue.c
index 49bc766..f2b96a1 100644
--- a/platform/linux-generic/source/odp_queue.c
+++ b/platform/linux-generic/source/odp_queue.c
@@ -132,6 +132,15 @@ odp_queue_type_t odp_queue_type(odp_queue_t handle)
 	return queue->s.type;
 }
 
+odp_schedule_sync_t odp_queue_sched_type(odp_queue_t handle)
+{
+	queue_entry_t *queue;
+
+	queue = queue_to_qentry(handle);
+
+	return queue->s.param.sched.sync;
+}
+
 odp_queue_t odp_queue_create(const char *name, odp_queue_type_t type,
 			     odp_queue_param_t *param)
 {
diff --git a/platform/linux-generic/source/odp_schedule.c b/platform/linux-generic/source/odp_schedule.c
index c3e071a..12f192b 100644
--- a/platform/linux-generic/source/odp_schedule.c
+++ b/platform/linux-generic/source/odp_schedule.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -60,6 +61,7 @@ typedef struct {
 	int num;
 	int index;
 	odp_queue_t queue;
+	int pause;
 
 } sched_local_t;
@@ -154,6 +156,7 @@ int odp_schedule_init_local(void)
 	sched_local.num   = 0;
 	sched_local.index = 0;
 	sched_local.queue = ODP_QUEUE_INVALID;
+	sched_local.pause = 0;
 
 	return 0;
 }
@@ -197,7 +200,7 @@ void odp_schedule_queue(odp_queue_t queue, int prio)
 }
 
-void odp_schedule_release_atomic_context(void)
+void odp_schedule_release_atomic(void)
 {
 	if (sched_local.pri_queue != ODP_QUEUE_INVALID &&
 	    sched_local.num == 0) {
@@ -223,13 +226,14 @@ static inline int copy_bufs(odp_buffer_t out_buf[], unsigned int max)
 	return i;
 }
 
+
 /*
  * Schedule queues
  *
  * TODO: SYNC_ORDERED not implemented yet
  */
 static int schedule(odp_queue_t *out_queue, odp_buffer_t out_buf[],
-		    unsigned int max_num)
+		    unsigned int max_num, unsigned int max_deq)
 {
 	int i, j;
 	int thr;
@@ -244,7 +248,10 @@ static int schedule(odp_queue_t *out_queue, odp_buffer_t out_buf[],
 		return ret;
 	}
 
-	odp_schedule_release_atomic_context();
+	odp_schedule_release_atomic();
+
+	if (odp_unlikely(sched_local.pause))
+		return 0;
 
 	thr = odp_thread_id();
@@ -279,7 +286,7 @@ static int schedule(odp_queue_t *out_queue, odp_buffer_t out_buf[],
 
 			num = odp_queue_deq_multi(queue, sched_local.buf,
-						  MAX_DEQ);
+						  max_deq);
 
 			if (num == 0) {
 				/* Remove empty queue from scheduling,
@@ -320,73 +327,92 @@ static int schedule(odp_queue_t *out_queue, odp_buffer_t out_buf[],
 }
 
-odp_buffer_t odp_schedule_once(odp_queue_t *out_queue)
+static int schedule_loop(odp_queue_t *out_queue, uint64_t wait,
+			 odp_buffer_t out_buf[],
+			 unsigned int max_num, unsigned int max_deq)
 {
-	odp_buffer_t buf = ODP_BUFFER_INVALID;
+	uint64_t start_cycle, cycle, diff;
+	int ret;
 
-	schedule(out_queue, &buf, 1);
+	start_cycle = 0;
 
-	return buf;
+	while (1) {
+		ret = schedule(out_queue, out_buf, max_num, max_deq);
+
+		if (ret)
+			break;
+
+		if (wait == ODP_SCHED_WAIT)
+			continue;
+
+		if (wait == ODP_SCHED_NO_WAIT)
+			break;
+
+		if (start_cycle == 0) {
+			start_cycle = odp_time_get_cycles();
+			continue;
+		}
+
+		cycle = odp_time_get_cycles();
+		diff  = odp_time_diff_cycles(start_cycle, cycle);
+
+		if (wait < diff)
+			break;
+	}
+
+	return ret;
 }
 
-odp_buffer_t odp_schedule(odp_queue_t *out_queue)
+odp_buffer_t odp_schedule(odp_queue_t *out_queue, uint64_t wait)
 {
 	odp_buffer_t buf;
-	int ret;
 
-	while (1) {
-		ret = schedule(out_queue, &buf, 1);
+	buf = ODP_BUFFER_INVALID;
 
-		if (ret)
-			return buf;
-	}
+	schedule_loop(out_queue, wait, &buf, 1, MAX_DEQ);
+
+	return buf;
 }
 
-odp_buffer_t odp_schedule_n(odp_queue_t *out_queue, unsigned int n)
+odp_buffer_t odp_schedule_one(odp_queue_t *out_queue, uint64_t wait)
 {
 	odp_buffer_t buf;
-	int ret;
 
-	while (n--) {
-		ret = schedule(out_queue, &buf, 1);
+	buf = ODP_BUFFER_INVALID;
 
-		if (ret)
-			return buf;
-	}
+	schedule_loop(out_queue, wait, &buf, 1, 1);
 
-	return ODP_BUFFER_INVALID;
+	return buf;
 }
 
-int odp_schedule_multi(odp_queue_t *out_queue, odp_buffer_t out_buf[],
-		       unsigned int num)
+int odp_schedule_multi(odp_queue_t *out_queue, uint64_t wait,
+		       odp_buffer_t out_buf[], unsigned int num)
 {
-	int ret;
+	return schedule_loop(out_queue, wait, out_buf, num, MAX_DEQ);
+}
 
-	while (1) {
-		ret = schedule(out_queue, out_buf, num);
 
-		if (ret)
-			return ret;
-	}
+void odp_schedule_pause(void)
+{
+	sched_local.pause = 1;
 }
 
-int odp_schedule_multi_n(odp_queue_t *out_queue, odp_buffer_t out_buf[],
-			 unsigned int num, unsigned int n)
+void odp_schedule_resume(void)
 {
-	int ret;
+	sched_local.pause = 0;
+}
 
-	while (n--) {
-		ret = schedule(out_queue, out_buf, num);
 
-		if (ret)
-			return ret;
-	}
+uint64_t odp_schedule_wait_time(uint64_t ns)
+{
+	if (ns <= ODP_SCHED_NO_WAIT)
+		ns = ODP_SCHED_NO_WAIT + 1;
 
-	return 0;
+	return odp_time_ns_to_cycles(ns);
 }
diff --git a/platform/linux-generic/source/odp_time.c b/platform/linux-generic/source/odp_time.c
index 23ff8f5..4f5e507 100644
--- a/platform/linux-generic/source/odp_time.c
+++ b/platform/linux-generic/source/odp_time.c
@@ -9,6 +9,8 @@
 #include
 #include
 
+#define GIGA 1000000000
+
 #if defined __x86_64__ || defined __i386__
 
 uint64_t odp_time_get_cycles(void)
@@ -66,7 +68,7 @@ uint64_t odp_time_get_cycles(void)
 	ns = (uint64_t) time.tv_nsec;
 
 	cycles  = sec * hz;
-	cycles += (ns * hz) / 1000000000;
+	cycles += (ns * hz) / GIGA;
 
 	return cycles;
 }
@@ -85,8 +87,19 @@ uint64_t odp_time_cycles_to_ns(uint64_t cycles)
 {
 	uint64_t hz = odp_sys_cpu_hz();
 
-	if (cycles > (UINT64_MAX / 1000000000))
-		return 1000000000*(cycles/hz);
+	if (cycles > (UINT64_MAX / GIGA))
+		return (cycles/hz)*GIGA;
+
+	return (cycles*GIGA)/hz;
+}
+
+
+uint64_t odp_time_ns_to_cycles(uint64_t ns)
+{
+	uint64_t hz = odp_sys_cpu_hz();
+
+	if (ns > (UINT64_MAX / hz))
+		return (ns/GIGA)*hz;
 
-	return (1000000000*cycles)/hz;
+	return (ns*hz)/GIGA;
 }
diff --git a/test/Makefile b/test/Makefile
index 2ff7a4c..9e3c482 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -9,6 +9,7 @@ all:
 	$(MAKE) -C example
 	$(MAKE) -C packet
 	$(MAKE) -C packet_netmap
+	$(MAKE) -C timer
 
 .PHONY: clean
 clean:
@@ -16,6 +17,7 @@ clean:
 	$(MAKE) -C example clean
 	$(MAKE) -C packet clean
 	$(MAKE) -C packet_netmap clean
+	$(MAKE) -C timer clean
 
 .PHONY: install
 install:
@@ -23,3 +25,4 @@ install:
 	$(MAKE) -C example install
 	$(MAKE) -C packet install
 	$(MAKE) -C packet_netmap install
+	$(MAKE) -C timer install
diff --git a/test/example/odp_example.c b/test/example/odp_example.c
index d676bf7..be96093 100644
--- a/test/example/odp_example.c
+++ b/test/example/odp_example.c
@@ -33,7 +33,6 @@
 #define QUEUE_ROUNDS  (512*1024)    /**< Queue test rounds */
 #define ALLOC_ROUNDS  (1024*1024)   /**< Alloc test rounds */
 #define MULTI_BUFS_MAX  4           /**< Buffer burst size */
-#define SCHED_RETRY   100           /**< Schedule retries */
 #define TEST_SEC      2             /**< Time test duration in sec */
 
 /** Dummy message */
@@ -55,11 +54,6 @@ typedef struct {
 
 static odp_barrier_t test_barrier;
 
-/* #define TEST_TIMEOUTS */
-#ifdef TEST_TIMEOUTS
-static odp_timer_t test_timer;
-#endif
-
 /**
  * @internal Clear all scheduled queues. Retry to be sure that all
  * buffers have been scheduled.
@@ -69,7 +63,7 @@ static void clear_sched_queues(void)
 	odp_buffer_t buf;
 
 	while (1) {
-		buf = odp_schedule_n(NULL, SCHED_RETRY);
+		buf = odp_schedule(NULL, ODP_SCHED_NO_WAIT);
 
 		if (buf == ODP_BUFFER_INVALID)
 			break;
@@ -78,48 +72,76 @@ static void clear_sched_queues(void)
 	}
 }
 
-#ifdef TEST_TIMEOUTS
-static void test_timeouts(int thr)
+
+static int create_queue(int thr, odp_buffer_pool_t msg_pool, int prio)
 {
-	uint64_t tick;
-	odp_queue_t queue;
+	char name[] = "sched_XX_00";
 	odp_buffer_t buf;
-	int num = 10;
+	odp_queue_t queue;
 
-	ODP_DBG("  [%i] test_timeouts\n", thr);
+	buf = odp_buffer_alloc(msg_pool);
 
-	queue = odp_queue_lookup("timer_queue");
+	if (!odp_buffer_is_valid(buf)) {
+		ODP_ERR("  [%i] msg_pool alloc failed\n", thr);
+		return -1;
+	}
 
-	tick = odp_timer_current_tick(test_timer);
+	name[6] = '0' + prio/10;
+	name[7] = '0' + prio - 10*(prio/10);
 
-	ODP_DBG("  [%i] current tick %"PRIu64"\n", thr, tick);
+	queue = odp_queue_lookup(name);
 
-	tick += 100;
+	if (queue == ODP_QUEUE_INVALID) {
+		ODP_ERR("  [%i] Queue %s lookup failed.\n", thr, name);
+		return -1;
+	}
 
-	odp_timer_absolute_tmo(test_timer, tick,
-			       queue, ODP_BUFFER_INVALID);
+	if (odp_queue_enq(queue, buf)) {
+		ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
+		return -1;
+	}
 
+	return 0;
+}
 
-	while (1) {
-		while ((buf = odp_queue_deq(queue) == ODP_BUFFER_INVALID))
-			;
+static int create_queues(int thr, odp_buffer_pool_t msg_pool, int prio)
+{
+	char name[] = "sched_XX_YY";
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	int i;
 
-		/* ODP_DBG("  [%i] timeout\n", thr); */
+	name[6] = '0' + prio/10;
+	name[7] = '0' + prio - 10*(prio/10);
 
-		odp_buffer_free(buf);
+	/* Alloc and enqueue a buffer per queue */
+	for (i = 0; i < QUEUES_PER_PRIO; i++) {
+		name[9]  = '0' + i/10;
+		name[10] = '0' + i - 10*(i/10);
 
-		num--;
+		queue = odp_queue_lookup(name);
 
-		if (num == 0)
-			break;
+		if (queue == ODP_QUEUE_INVALID) {
+			ODP_ERR("  [%i] Queue %s lookup failed.\n", thr, name);
+			return -1;
+		}
+
+		buf = odp_buffer_alloc(msg_pool);
 
-		tick = odp_timer_current_tick(test_timer) + 100;
+		if (!odp_buffer_is_valid(buf)) {
+			ODP_ERR("  [%i] msg_pool alloc failed\n", thr);
+			return -1;
+		}
 
-		odp_timer_absolute_tmo(test_timer, tick,
-				       queue, ODP_BUFFER_INVALID);
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
 	}
+
+	return 0;
 }
-#endif
+
 
 /**
  * @internal Test single buffer alloc and free
@@ -152,7 +174,7 @@ static int test_alloc_single(int thr, odp_buffer_pool_t pool)
 	cycles = odp_time_diff_cycles(t1, t2);
 	ns     = odp_time_cycles_to_ns(cycles);
 
-	printf("  [%i] alloc_sng alloc+free   %"PRIu64" cycles, %"PRIu64" ns\n",
+	printf("  [%i] alloc_sng alloc+free    %"PRIu64" cycles, %"PRIu64" ns\n",
 	       thr, cycles/ALLOC_ROUNDS, ns/ALLOC_ROUNDS);
 
 	return 0;
@@ -258,7 +280,7 @@ static int test_poll_queue(int thr, odp_buffer_pool_t msg_pool)
 	cycles = odp_time_diff_cycles(t1, t2);
 	ns     = odp_time_cycles_to_ns(cycles);
 
-	printf("  [%i] poll_queue enq+deq     %"PRIu64" cycles, %"PRIu64" ns\n",
+	printf("  [%i] poll_queue enq+deq      %"PRIu64" cycles, %"PRIu64" ns\n",
 	       thr, cycles/QUEUE_ROUNDS, ns/QUEUE_ROUNDS);
 
 	odp_buffer_free(buf);
@@ -266,7 +288,7 @@ static int test_poll_queue(int thr, odp_buffer_pool_t msg_pool)
 }
 
 /**
- * @internal Test scheduling of a single queue
+ * @internal Test scheduling of a single queue - with odp_schedule_one()
  *
  * Enqueue a buffer to the shared queue. Schedule and enqueue the received
  * buffer back into the queue.
@@ -278,47 +300,22 @@ static int test_poll_queue(int thr, odp_buffer_pool_t msg_pool)
  *
  * @return 0 if successful
  */
-static int test_sched_single_queue(const char *str, int thr,
-				   odp_buffer_pool_t msg_pool, int prio)
+static int test_schedule_one_single(const char *str, int thr,
+				    odp_buffer_pool_t msg_pool, int prio)
 {
 	odp_buffer_t buf;
 	odp_queue_t queue;
 	uint64_t t1, t2, cycles, ns;
 	uint32_t i;
 	uint32_t tot = 0;
-	char name[] = "sched_XX_00";
-
-	buf = odp_buffer_alloc(msg_pool);
-
-	if (!odp_buffer_is_valid(buf)) {
-		ODP_ERR("  [%i] msg_pool alloc failed\n", thr);
-		return -1;
-	}
-
-	name[6] = '0' + prio/10;
-	name[7] = '0' + prio - 10*(prio/10);
 
-	queue = odp_queue_lookup(name);
-
-	if (queue == ODP_QUEUE_INVALID) {
-		ODP_ERR("  [%i] Queue %s lookup failed.\n", thr, name);
+	if (create_queue(thr, msg_pool, prio))
 		return -1;
-	}
-
-	/* printf("  [%i] prio %i queue %s\n", thr, prio, name); */
-
-	if (odp_queue_enq(queue, buf)) {
-		ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
-		return -1;
-	}
 
 	t1 = odp_time_get_cycles();
 
 	for (i = 0; i < QUEUE_ROUNDS; i++) {
-		buf = odp_schedule_n(NULL, SCHED_RETRY);
-
-		if (buf == ODP_BUFFER_INVALID)
-			break;
+		buf = odp_schedule_one(&queue, ODP_SCHED_WAIT);
 
 		if (odp_queue_enq(queue, buf)) {
 			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
@@ -326,6 +323,9 @@ static int test_sched_single_queue(const char *str, int thr,
 		}
 	}
 
+	if (odp_queue_sched_type(queue) == ODP_SCHED_SYNC_ATOMIC)
+		odp_schedule_release_atomic();
+
 	t2     = odp_time_get_cycles();
 	cycles = odp_time_diff_cycles(t1, t2);
 	ns     = odp_time_cycles_to_ns(cycles);
@@ -349,7 +349,7 @@ static int test_sched_single_queue(const char *str, int thr,
 }
 
 /**
- * @internal Test scheduling of multiple queues
+ * @internal Test scheduling of multiple queues - with odp_schedule_one()
  *
  * Enqueue a buffer to each queue. Schedule and enqueue the received
  * buffer back into the queue it came from.
@@ -361,7 +361,7 @@ static int test_sched_single_queue(const char *str, int thr,
  *
  * @return 0 if successful
  */
-static int test_sched_multi_queue(const char *str, int thr,
+static int test_schedule_one_many(const char *str, int thr,
 				  odp_buffer_pool_t msg_pool, int prio)
 {
 	odp_buffer_t buf;
@@ -371,29 +371,95 @@ static int test_sched_multi_queue(const char *str, int thr,
 	uint64_t cycles, ns;
 	uint32_t i;
 	uint32_t tot = 0;
-	char name[] = "sched_XX_YY";
 
-	name[6] = '0' + prio/10;
-	name[7] = '0' + prio - 10*(prio/10);
+	if (create_queues(thr, msg_pool, prio))
+		return -1;
 
-	/* Alloc and enqueue a buffer per queue */
-	for (i = 0; i < QUEUES_PER_PRIO; i++) {
-		name[9]  = '0' + i/10;
-		name[10] = '0' + i - 10*(i/10);
+	/* Start sched-enq loop */
+	t1 = odp_time_get_cycles();
 
-		queue = odp_queue_lookup(name);
+	for (i = 0; i < QUEUE_ROUNDS; i++) {
+		buf = odp_schedule_one(&queue, ODP_SCHED_WAIT);
 
-		if (queue == ODP_QUEUE_INVALID) {
-			ODP_ERR("  [%i] Queue %s lookup failed.\n", thr, name);
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
 			return -1;
 		}
+	}
 
-		buf = odp_buffer_alloc(msg_pool);
+	if (odp_queue_sched_type(queue) == ODP_SCHED_SYNC_ATOMIC)
+		odp_schedule_release_atomic();
 
-		if (!odp_buffer_is_valid(buf)) {
-			ODP_ERR("  [%i] msg_pool alloc failed\n", thr);
+	t2     = odp_time_get_cycles();
+	cycles = odp_time_diff_cycles(t1, t2);
+	ns     = odp_time_cycles_to_ns(cycles);
+	tot    = i;
+
+	odp_barrier_sync(&test_barrier);
+	clear_sched_queues();
+
+	if (tot) {
+		cycles = cycles/tot;
+		ns     = ns/tot;
+	} else {
+		cycles = 0;
+		ns     = 0;
+	}
+
+	printf("  [%i] %s enq+deq %"PRIu64" cycles, %"PRIu64" ns\n",
+	       thr, str, cycles, ns);
+
+	return 0;
+}
+
+/**
+ * @internal Test scheduling of a single queue - with odp_schedule()
+ *
+ * Enqueue a buffer to the shared queue. Schedule and enqueue the received
+ * buffer back into the queue.
+ *
+ * @param str      Test case name string
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Priority
+ *
+ * @return 0 if successful
+ */
+static int test_schedule_single(const char *str, int thr,
+				odp_buffer_pool_t msg_pool, int prio)
+{
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	uint64_t t1, t2, cycles, ns;
+	uint32_t i;
+	uint32_t tot = 0;
+
+	if (create_queue(thr, msg_pool, prio))
+		return -1;
+
+	t1 = odp_time_get_cycles();
+
+	for (i = 0; i < QUEUE_ROUNDS; i++) {
+		buf = odp_schedule(&queue, ODP_SCHED_WAIT);
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
 			return -1;
 		}
+	}
+
+	/* Clear possible locally stored buffers */
+	odp_schedule_pause();
+
+	tot = i;
+
+	while (1) {
+		buf = odp_schedule(&queue, ODP_SCHED_NO_WAIT);
+
+		if (buf == ODP_BUFFER_INVALID)
+			break;
+
+		tot++;
 
 		if (odp_queue_enq(queue, buf)) {
 			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
@@ -401,25 +467,93 @@ static int test_sched_multi_queue(const char *str, int thr,
 			return -1;
 		}
 	}
 
+	odp_schedule_resume();
+
+	t2     = odp_time_get_cycles();
+	cycles = odp_time_diff_cycles(t1, t2);
+	ns     = odp_time_cycles_to_ns(cycles);
+
+	odp_barrier_sync(&test_barrier);
+	clear_sched_queues();
+
+	if (tot) {
+		cycles = cycles/tot;
+		ns     = ns/tot;
+	} else {
+		cycles = 0;
+		ns     = 0;
+	}
+
+	printf("  [%i] %s enq+deq %"PRIu64" cycles, %"PRIu64" ns\n",
+	       thr, str, cycles, ns);
+
+	return 0;
+}
+
+
+/**
+ * @internal Test scheduling of multiple queues - with odp_schedule()
+ *
+ * Enqueue a buffer to each queue. Schedule and enqueue the received
+ * buffer back into the queue it came from.
+ *
+ * @param str      Test case name string
+ * @param thr      Thread
+ * @param msg_pool Buffer pool
+ * @param prio     Priority
+ *
+ * @return 0 if successful
+ */
+static int test_schedule_many(const char *str, int thr,
+			      odp_buffer_pool_t msg_pool, int prio)
+{
+	odp_buffer_t buf;
+	odp_queue_t queue;
+	uint64_t t1 = 0;
+	uint64_t t2 = 0;
+	uint64_t cycles, ns;
+	uint32_t i;
+	uint32_t tot = 0;
+
+	if (create_queues(thr, msg_pool, prio))
+		return -1;
+
 	/* Start sched-enq loop */
 	t1 = odp_time_get_cycles();
 
 	for (i = 0; i < QUEUE_ROUNDS; i++) {
-		buf = odp_schedule_n(&queue, SCHED_RETRY);
+		buf = odp_schedule(&queue, ODP_SCHED_WAIT);
+
+		if (odp_queue_enq(queue, buf)) {
+			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	/* Clear possible locally stored buffers */
+	odp_schedule_pause();
+
+	tot = i;
+
+	while (1) {
+		buf = odp_schedule(&queue, ODP_SCHED_NO_WAIT);
 
 		if (buf == ODP_BUFFER_INVALID)
 			break;
 
+		tot++;
+
 		if (odp_queue_enq(queue, buf)) {
 			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
 			return -1;
 		}
 	}
 
+	odp_schedule_resume();
+
 	t2     = odp_time_get_cycles();
 	cycles = odp_time_diff_cycles(t1, t2);
 	ns     = odp_time_cycles_to_ns(cycles);
-	tot    = i;
 
 	odp_barrier_sync(&test_barrier);
 	clear_sched_queues();
@@ -448,8 +582,8 @@ static int test_sched_multi_queue(const char *str, int thr,
  *
  * @return 0 if successful
  */
-static int test_sched_multi_queue_m(const char *str, int thr,
-				    odp_buffer_pool_t msg_pool, int prio)
+static int test_schedule_multi(const char *str, int thr,
+			       odp_buffer_pool_t msg_pool, int prio)
 {
 	odp_buffer_t buf[MULTI_BUFS_MAX];
 	odp_queue_t queue;
@@ -457,6 +591,7 @@ static int test_sched_multi_queue_m(const char *str, int thr,
 	uint64_t t2 = 0;
 	uint64_t cycles, ns;
 	int i, j;
+	int num;
 	uint32_t tot = 0;
 	char name[] = "sched_XX_YY";
@@ -494,10 +629,23 @@ static int test_sched_multi_queue_m(const char *str, int thr,
 	t1 = odp_time_get_cycles();
 
 	for (i = 0; i < QUEUE_ROUNDS; i++) {
-		int num;
+		num = odp_schedule_multi(&queue, ODP_SCHED_WAIT, buf,
+					 MULTI_BUFS_MAX);
+
+		tot += num;
 
-		num = odp_schedule_multi_n(&queue, buf,
-					   MULTI_BUFS_MAX, SCHED_RETRY);
+		if (odp_queue_enq_multi(queue, buf, num)) {
+			ODP_ERR("  [%i] Queue enqueue failed.\n", thr);
+			return -1;
+		}
+	}
+
+	/* Clear possible locally stored buffers */
+	odp_schedule_pause();
+
+	while (1) {
+		num = odp_schedule_multi(&queue, ODP_SCHED_NO_WAIT, buf,
+					 MULTI_BUFS_MAX);
 
 		if (num == 0)
 			break;
@@ -510,6 +658,9 @@ static int test_sched_multi_queue_m(const char *str, int thr,
 		}
 	}
 
+	odp_schedule_resume();
+
+
 	t2     = odp_time_get_cycles();
 	cycles = odp_time_diff_cycles(t1, t2);
 	ns     = odp_time_cycles_to_ns(cycles);
@@ -580,47 +731,70 @@ static void *run_thread(void *arg)
 	if (test_poll_queue(thr, msg_pool))
 		return NULL;
 
+	/* Low prio */
+
 	odp_barrier_sync(&test_barrier);
 
-	if (test_sched_single_queue("sched_single_hi", thr, msg_pool,
-				    ODP_SCHED_PRIO_HIGHEST))
+	if (test_schedule_one_single("sched_one_s_lo", thr, msg_pool,
+				     ODP_SCHED_PRIO_LOWEST))
 		return NULL;
 
 	odp_barrier_sync(&test_barrier);
 
-	if (test_sched_single_queue("sched_single_lo", thr, msg_pool,
-				    ODP_SCHED_PRIO_LOWEST))
+	if (test_schedule_single("sched_____s_lo", thr, msg_pool,
+				 ODP_SCHED_PRIO_LOWEST))
 		return NULL;
 
 	odp_barrier_sync(&test_barrier);
 
-	if (test_sched_multi_queue("sched_multi_hi", thr, msg_pool,
-				   ODP_SCHED_PRIO_HIGHEST))
+	if (test_schedule_one_many("sched_one_m_lo", thr, msg_pool,
+				   ODP_SCHED_PRIO_LOWEST))
 		return NULL;
 
 	odp_barrier_sync(&test_barrier);
 
-	if (test_sched_multi_queue("sched_multi_lo", thr, msg_pool,
-				   ODP_SCHED_PRIO_LOWEST))
+	if (test_schedule_many("sched_____m_lo", thr, msg_pool,
+			       ODP_SCHED_PRIO_LOWEST))
 		return NULL;
 
 	odp_barrier_sync(&test_barrier);
 
-	if (test_sched_multi_queue_m("sched_multi_hi_m", thr, msg_pool,
+	if (test_schedule_multi("sched_multi_lo", thr, msg_pool,
+				ODP_SCHED_PRIO_LOWEST))
+		return NULL;
+
+	/* High prio */
+
+	odp_barrier_sync(&test_barrier);
+
+	if (test_schedule_one_single("sched_one_s_hi", thr, msg_pool,
 				     ODP_SCHED_PRIO_HIGHEST))
return NULL; odp_barrier_sync(&test_barrier); - if (test_sched_multi_queue_m("sched_multi_lo_m", thr, msg_pool, - ODP_SCHED_PRIO_LOWEST)) + if (test_schedule_single("sched_____s_hi", thr, msg_pool, + ODP_SCHED_PRIO_HIGHEST)) return NULL; -#ifdef TEST_TIMEOUTS odp_barrier_sync(&test_barrier); - test_timeouts(thr); -#endif + if (test_schedule_one_many("sched_one_m_hi", thr, msg_pool, + ODP_SCHED_PRIO_HIGHEST)) + return NULL; + + odp_barrier_sync(&test_barrier); + + if (test_schedule_many("sched_____m_hi", thr, msg_pool, + ODP_SCHED_PRIO_HIGHEST)) + return NULL; + + odp_barrier_sync(&test_barrier); + + if (test_schedule_multi("sched_multi_hi", thr, msg_pool, + ODP_SCHED_PRIO_HIGHEST)) + return NULL; + printf("Thread %i exits\n", thr); fflush(NULL); @@ -836,22 +1010,6 @@ int main(int argc, char *argv[]) return -1; } - -#ifdef TEST_TIMEOUTS - /* - * Create a queue for timer test - */ - queue = odp_queue_create("timer_queue", ODP_QUEUE_TYPE_SCHED, NULL); - - if (queue == ODP_QUEUE_INVALID) { - ODP_ERR("Timer queue create failed.\n"); - return -1; - } - - test_timer = odp_timer_create("test_timer", pool, - 1000000, 1000000, 1000000000000); -#endif - /* * Create queues for schedule test. QUEUES_PER_PRIO per priority. 
*/ diff --git a/test/packet/odp_example_pktio.c b/test/packet/odp_example_pktio.c index 8a13013..3acb1fb 100644 --- a/test/packet/odp_example_pktio.c +++ b/test/packet/odp_example_pktio.c @@ -155,7 +155,7 @@ static void *pktio_queue_thread(void *arg) #if 1 /* Use schedule to get buf from any input queue */ - buf = odp_schedule(NULL); + buf = odp_schedule(NULL, ODP_SCHED_WAIT); #else /* Always dequeue from the same input queue */ buf = odp_queue_deq(inq_def); diff --git a/test/packet_netmap/odp_example_pktio_netmap.c b/test/packet_netmap/odp_example_pktio_netmap.c index 283abe4..f50f764 100644 --- a/test/packet_netmap/odp_example_pktio_netmap.c +++ b/test/packet_netmap/odp_example_pktio_netmap.c @@ -133,7 +133,7 @@ static void *pktio_queue_thread(void *arg) pktio_info_t *pktio_info; /* Use schedule to get buf from any input queue */ - buf = odp_schedule(NULL); + buf = odp_schedule(NULL, ODP_SCHED_WAIT); pkt = odp_packet_from_buffer(buf); diff --git a/test/timer/Makefile b/test/timer/Makefile new file mode 100644 index 0000000..cefea23 --- /dev/null +++ b/test/timer/Makefile @@ -0,0 +1,46 @@ +# Copyright (c) 2013, Linaro Limited +# All rights reserved. +# +# SPDX-License-Identifier: BSD-3-Clause + +ODP_ROOT = ../.. 
+ODP_APP  = odp_timer_test
+
+include $(ODP_ROOT)/Makefile.inc
+include ../Makefile.inc
+
+.PHONY: default
+default: $(OBJ_DIR) $(ODP_APP)
+
+OBJS  =
+OBJS += $(OBJ_DIR)/odp_timer_test.o
+
+DEPS  = $(OBJS:.o=.d)
+
+-include $(DEPS)
+
+
+#
+# Compile rules
+#
+$(OBJ_DIR)/%.o: %.c
+	$(ECHO) Compiling $<
+	$(CC) -c -MD $(EXTRA_CFLAGS) $(CFLAGS) -o $@ $<
+
+#
+# Link rule
+#
+$(ODP_APP): $(ODP_LIB) $(OBJS)
+	$(ECHO) Linking $@
+	$(CC) $(LDFLAGS) $(OBJS) $(ODP_LIB) $(STD_LIBS) -o $@
+
+.PHONY: clean
+clean:
+	$(RMDIR) $(OBJ_DIR)
+	$(RM) $(ODP_APP)
+	$(MAKE) -C $(ODP_DIR) clean
+
+.PHONY: install
+install:
+	install -d $(DESTDIR)/share/odp
+	install -m 0755 $(ODP_APP) $(DESTDIR)/share/odp/
diff --git a/test/timer/odp_timer_test.c b/test/timer/odp_timer_test.c
new file mode 100644
index 0000000..341265d
--- /dev/null
+++ b/test/timer/odp_timer_test.c
@@ -0,0 +1,335 @@
+/* Copyright (c) 2013, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * @example odp_timer_test.c  ODP timer example application
+ */
+
+#include <string.h>
+#include <stdlib.h>
+
+/* ODP main header */
+#include <odp.h>
+
+/* ODP helper for Linux apps */
+#include <helper/odp_linux.h>
+
+/* GNU lib C */
+#include <getopt.h>
+
+
+#define MAX_WORKERS           32            /**< Max worker threads */
+#define MSG_POOL_SIZE         (4*1024*1024) /**< Message pool size */
+
+/** Dummy message */
+typedef struct {
+	int msg_id; /**< Message ID */
+	int seq;    /**< Sequence number */
+} test_message_t;
+
+#define MSG_HELLO 1 /**< Hello */
+#define MSG_ACK   2 /**< Ack */
+
+/** Test arguments */
+typedef struct {
+	int core_count; /**< Core count */
+} test_args_t;
+
+
+/** @private Barrier for test synchronisation */
+static odp_barrier_t test_barrier;
+
+/** @private Timer handle*/
+static odp_timer_t test_timer;
+
+
+/** @private test timeout */
+static void test_timeouts(int thr)
+{
+	uint64_t tick;
+	odp_queue_t queue;
+	odp_buffer_t buf;
+	int num = 10;
+
+	ODP_DBG("  [%i] test_timeouts\n", thr);
+
+	queue = odp_queue_lookup("timer_queue");
+
+	tick = odp_timer_current_tick(test_timer);
+
+	tick += 100;
+
+	odp_timer_absolute_tmo(test_timer, tick,
+			       queue, ODP_BUFFER_INVALID);
+
+	ODP_DBG("  [%i] current tick %"PRIu64"\n", thr, tick);
+
+	while (1) {
+		buf = odp_schedule_one(&queue, ODP_SCHED_WAIT);
+
+		/* TODO: read tick from tmo metadata */
+		tick = odp_timer_current_tick(test_timer);
+
+		ODP_DBG("  [%i] timeout, tick %"PRIu64"\n", thr, tick);
+
+		odp_buffer_free(buf);
+
+		num--;
+
+		if (num == 0)
+			break;
+
+		tick += 100;
+
+		odp_timer_absolute_tmo(test_timer, tick,
+				       queue, ODP_BUFFER_INVALID);
+	}
+
+	if (odp_queue_sched_type(queue) == ODP_SCHED_SYNC_ATOMIC)
+		odp_schedule_release_atomic();
+}
+
+
+/**
+ * @internal Worker thread
+ *
+ * @param arg  Arguments
+ *
+ * @return NULL on failure
+ */
+static void *run_thread(void *arg)
+{
+	int thr;
+	odp_buffer_pool_t msg_pool;
+
+	thr = odp_thread_id();
+
+	printf("Thread %i starts on core %i\n", thr, odp_thread_core());
+
+	/*
+	 * Test barriers back-to-back
+	 */
+	odp_barrier_sync(&test_barrier);
+	odp_barrier_sync(&test_barrier);
+	odp_barrier_sync(&test_barrier);
+	odp_barrier_sync(&test_barrier);
+
+	/*
+	 * Find the buffer pool
+	 */
+	msg_pool = odp_buffer_pool_lookup("msg_pool");
+
+	if (msg_pool == ODP_BUFFER_POOL_INVALID) {
+		ODP_ERR("  [%i] msg_pool not found\n", thr);
+		return NULL;
+	}
+
+	odp_barrier_sync(&test_barrier);
+
+	test_timeouts(thr);
+
+
+	printf("Thread %i exits\n", thr);
+	fflush(NULL);
+	return arg;
+}
+
+
+/**
+ * @internal Print help
+ */
+static void print_usage(void)
+{
+	printf("\n\nUsage: ./odp_example [options]\n");
+	printf("Options:\n");
+	printf("  -c, --count    core count, core IDs start from 1\n");
+	printf("  -h, --help     this help\n");
+	printf("\n\n");
+}
+
+
+/**
+ * @internal Parse arguments
+ *
+ * @param argc  Argument count
+ * @param argv  Argument vector
+ * @param args  Test arguments
+ */
+static void parse_args(int argc, char *argv[], test_args_t *args)
+{
+	int opt;
+	int long_index;
+
+	static struct option longopts[] = {
+		{"count", required_argument, NULL, 'c'},
+		{"help", no_argument, NULL, 'h'},
+		{NULL, 0, NULL, 0}
+	};
+
+	while (1) {
+		opt = getopt_long(argc, argv, "+c:h", longopts, &long_index);
+
+		if (opt == -1)
+			break;	/* No more options */
+
+		switch (opt) {
+		case 'c':
+			args->core_count = atoi(optarg);
+			break;
+
+		case 'h':
+			print_usage();
+			exit(EXIT_SUCCESS);
+			break;
+
+		default:
+			break;
+		}
+	}
+}
+
+
+/**
+ * Test main function
+ */
+int main(int argc, char *argv[])
+{
+	odp_linux_pthread_t thread_tbl[MAX_WORKERS];
+	test_args_t args;
+	int thr_id;
+	int num_workers;
+	odp_buffer_pool_t pool;
+	void *pool_base;
+	odp_queue_t queue;
+	int first_core;
+	uint64_t cycles, ns;
+	odp_queue_param_t param;
+
+	printf("\nODP example starts\n");
+
+	memset(&args, 0, sizeof(args));
+	parse_args(argc, argv, &args);
+
+	memset(thread_tbl, 0, sizeof(thread_tbl));
+
+	if (odp_init_global()) {
+		printf("ODP global init failed.\n");
+		return -1;
+	}
+
+	printf("\n");
+	printf("ODP system info\n");
+	printf("---------------\n");
+	printf("ODP API version: %s\n",        odp_version_api_str());
+	printf("CPU model:       %s\n",        odp_sys_cpu_model_str());
+	printf("CPU freq (hz):   %"PRIu64"\n", odp_sys_cpu_hz());
+	printf("Cache line size: %i\n",        odp_sys_cache_line_size());
+	printf("Max core count:  %i\n",        odp_sys_core_count());
+
+	printf("\n");
+
+	/* A worker thread per core */
+	num_workers = odp_sys_core_count();
+
+	if (args.core_count)
+		num_workers = args.core_count;
+
+	/* force to max core count */
+	if (num_workers > MAX_WORKERS)
+		num_workers = MAX_WORKERS;
+
+	printf("num worker threads: %i\n", num_workers);
+
+	/*
+	 * By default core #0 runs Linux kernel background tasks.
+	 * Start mapping thread from core #1
+	 */
+	first_core = 1;
+
+	if (odp_sys_core_count() == 1)
+		first_core = 0;
+
+	printf("first core:         %i\n", first_core);
+
+	/*
+	 * Init this thread. It makes also ODP calls when
+	 * setting up resources for worker threads.
+	 */
+	thr_id = odp_thread_create(0);
+	odp_init_local(thr_id);
+
+	/*
+	 * Create message pool
+	 */
+	pool_base = odp_shm_reserve("msg_pool",
+				    MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE);
+
+	pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE,
+				      sizeof(test_message_t),
+				      ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW);
+
+	if (pool == ODP_BUFFER_POOL_INVALID) {
+		ODP_ERR("Pool create failed.\n");
+		return -1;
+	}
+
+	/*
+	 * Create a queue for timer test
+	 */
+	memset(&param, 0, sizeof(param));
+	param.sched.prio  = ODP_SCHED_PRIO_DEFAULT;
+	param.sched.sync  = ODP_SCHED_SYNC_NONE;
+	param.sched.group = ODP_SCHED_GROUP_DEFAULT;
+
+	queue = odp_queue_create("timer_queue", ODP_QUEUE_TYPE_SCHED, &param);
+
+	if (queue == ODP_QUEUE_INVALID) {
+		ODP_ERR("Timer queue create failed.\n");
+		return -1;
+	}
+
+	test_timer = odp_timer_create("test_timer", pool,
+				      1000000, 1000000, 1000000000000);
+
+
+	odp_shm_print_all();
+
+	printf("CPU freq %"PRIu64" hz\n", odp_sys_cpu_hz());
+	printf("Cycles vs nanoseconds:\n");
+	ns = 0;
+	cycles = odp_time_ns_to_cycles(ns);
+
+	printf("  %12"PRIu64" ns      -> %12"PRIu64" cycles\n", ns, cycles);
+	printf("  %12"PRIu64" cycles  -> %12"PRIu64" ns\n", cycles,
+	       odp_time_cycles_to_ns(cycles));
+
+	for (ns = 1; ns <= 100000000000; ns *= 10) {
+		cycles = odp_time_ns_to_cycles(ns);
+
+		printf("  %12"PRIu64" ns      -> %12"PRIu64" cycles\n", ns,
+		       cycles);
+		printf("  %12"PRIu64" cycles  -> %12"PRIu64" ns\n", cycles,
+		       odp_time_cycles_to_ns(cycles));
+	}
+
+	printf("\n");
+
+	/* Barrier to sync test case execution */
+	odp_barrier_init_count(&test_barrier, num_workers);
+
+	/* Create and launch worker threads */
+	odp_linux_pthread_create(thread_tbl, num_workers, first_core,
+				 run_thread, NULL);
+
+	/* Wait for worker threads to exit */
+	odp_linux_pthread_join(thread_tbl, num_workers);
+
+	printf("ODP timer test complete\n\n");
+
+	return 0;
+}