From patchwork Thu Jan 8 21:35:22 2015
X-Patchwork-Submitter: Ola Liljedahl
X-Patchwork-Id: 42893
From: Ola Liljedahl <ola.liljedahl@linaro.org>
To: lng-odp@lists.linaro.org
Date: Thu, 8 Jan 2015 22:35:22 +0100
Message-Id: <1420752923-24198-3-git-send-email-ola.liljedahl@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1420752923-24198-1-git-send-email-ola.liljedahl@linaro.org>
References: <1420752923-24198-1-git-send-email-ola.liljedahl@linaro.org>
Subject: [lng-odp] [PATCHv4 2/3] api: odp_timer.h: updated API, lock-less implementation

The timer API is updated according to
https://docs.google.com/a/linaro.org/document/d/1bfY_J8ecLJPsFTmYftb0NVmGnB9qkEc_NpcJ87yfaD8

A major change is that timers are now allocated and freed separately from
timeouts being set and cancelled. The lifetime of a timer normally matches
the lifetime of the associated stateful flow, while the lifetime of a timeout
corresponds to an individual packet being transmitted or received.

The reference timer implementation is lock-less on platforms that support
128-bit (16-byte) atomic exchange and CAS operations. Otherwise a lock-based
implementation (using as many locks as desired) is used, although some
operations (e.g. a reset that reuses the existing timeout buffer) may still
be lock-less.

The example example/timer/odp_timer_test.c is updated according to the
updated API.

Signed-off-by: Ola Liljedahl <ola.liljedahl@linaro.org>
---
(This document/code contribution attached is provided under the terms of
agreement LES-LTM-21309)

Updated API and odp_timer_test.c with latest review comments from Petri S.
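For reviewers, a minimal usage sketch of the updated API follows (not part of
the patch). It uses only calls declared in the new odp_timer.h below; the
10 ms resolution, the "tmo_pool" name, the pre-created timeout buffer pool
'pool', the queue 'queue' and the per-flow context pointer 'flow_ctx' are
illustrative assumptions, not values mandated by the API.

#include <odp.h>

/* Sketch: one timer per flow, armed with an absolute expiration tick.
 * 'pool' must be a buffer pool of type ODP_BUFFER_TYPE_TIMEOUT and
 * 'queue' a queue the application schedules or dequeues from. */
static void flow_timer_sketch(odp_buffer_pool_t pool, odp_queue_t queue,
			      void *flow_ctx)
{
	odp_timer_pool_param_t tparams;
	tparams.res_ns     = 10000*ODP_TIME_USEC; /* 10 ms resolution */
	tparams.min_tmo    = 0;
	tparams.max_tmo    = 10*ODP_TIME_SEC;
	tparams.num_timers = 100;
	tparams.private    = 0;                   /* shared pool */
	tparams.clk_src    = ODP_CLOCK_CPU;
	odp_timer_pool_t tp = odp_timer_pool_create("tmo_pool", &tparams);
	if (tp == ODP_TIMER_POOL_INVALID)
		return;
	odp_timer_pool_start();

	/* Allocate a timer; 'flow_ctx' is returned with every timeout */
	odp_timer_t tim = odp_timer_alloc(tp, queue, flow_ctx);
	if (tim == ODP_TIMER_INVALID)
		return;

	/* Arm the timer 100 ms from now with a user-provided timeout buffer */
	odp_buffer_t tmo_buf = odp_buffer_alloc(pool);
	if (tmo_buf == ODP_BUFFER_INVALID)
		return;
	uint64_t abs_tck = odp_timer_current_tick(tp) +
			   odp_timer_ns_to_tick(tp, 100000*ODP_TIME_USEC);
	if (odp_timer_set_abs(tim, abs_tck, &tmo_buf) != ODP_TIMER_SUCCESS)
		return; /* too early, too late or no timeout buffer */

	/* Tear down: cancel any pending timeout, then free the timer */
	if (odp_timer_cancel(tim, &tmo_buf) == 0)
		odp_buffer_free(tmo_buf); /* returned instead of delivered */
	(void)odp_timer_free(tim);
}

Compared with the old odp_timer_absolute_tmo() call, which allocated a fresh
timeout per request, the timer now keeps its timeout buffer across resets
(pass NULL to reuse it), which is what enables the lock-less reset path.
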
example/timer/odp_timer_test.c | 183 ++-- platform/linux-generic/include/api/odp_timer.h | 321 ++++-- .../linux-generic/include/odp_timer_internal.h | 62 +- platform/linux-generic/odp_timer.c | 1054 ++++++++++++++------ 4 files changed, 1133 insertions(+), 487 deletions(-) diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c index 2acf2fc..5de499b 100644 --- a/example/timer/odp_timer_test.c +++ b/example/timer/odp_timer_test.c @@ -26,7 +26,6 @@ #define MAX_WORKERS 32 /**< Max worker threads */ -#define MSG_POOL_SIZE (4*1024*1024) /**< Message pool size */ #define MSG_NUM_BUFS 10000 /**< Number of timers */ @@ -44,69 +43,125 @@ typedef struct { /** @private Barrier for test synchronisation */ static odp_barrier_t test_barrier; -/** @private Timer handle*/ -static odp_timer_t test_timer; +/** @private Buffer pool handle */ +static odp_buffer_pool_t pool; +/** @private Timer pool handle */ +static odp_timer_pool_t tp; + +/** @private Number of timeouts to receive */ +static odp_atomic_u32_t remain; + +/** @private Timer set status ASCII strings */ +static const char *timerset2str(odp_timer_set_t val) +{ + switch (val) { + case ODP_TIMER_SUCCESS: + return "success"; + case ODP_TIMER_TOOEARLY: + return "too early"; + case ODP_TIMER_TOOLATE: + return "too late"; + case ODP_TIMER_NOBUF: + return "no buffer"; + default: + return "?"; + } +}; + +/** @private Helper struct for timers */ +struct test_timer { + odp_timer_t tim; + odp_buffer_t buf; +}; + +/** @private Array of all timer helper structs */ +static struct test_timer tt[256]; /** @private test timeout */ static void test_abs_timeouts(int thr, test_args_t *args) { - uint64_t tick; uint64_t period; uint64_t period_ns; odp_queue_t queue; - odp_buffer_t buf; - int num; + uint64_t tick; + struct test_timer *ttp; EXAMPLE_DBG(" [%i] test_timeouts\n", thr); queue = odp_queue_lookup("timer_queue"); period_ns = args->period_us*ODP_TIME_USEC; - period = odp_timer_ns_to_tick(test_timer, period_ns); + period = odp_timer_ns_to_tick(tp, period_ns); EXAMPLE_DBG(" [%i] period %"PRIu64" ticks, %"PRIu64" ns\n", thr, period, period_ns); - tick = odp_timer_current_tick(test_timer); - - EXAMPLE_DBG(" [%i] current tick %"PRIu64"\n", thr, tick); + EXAMPLE_DBG(" [%i] current tick %"PRIu64"\n", thr, + odp_timer_current_tick(tp)); - tick += period; - - if (odp_timer_absolute_tmo(test_timer, tick, queue, ODP_BUFFER_INVALID) - == ODP_TIMER_TMO_INVALID){ - EXAMPLE_DBG("Timeout request failed\n"); + ttp = &tt[thr - 1]; /* Thread starts at 1 */ + ttp->tim = odp_timer_alloc(tp, queue, ttp); + if (ttp->tim == ODP_TIMER_INVALID) { + EXAMPLE_ERR("Failed to allocate timer\n"); return; } + ttp->buf = odp_buffer_alloc(pool); + if (ttp->buf == ODP_BUFFER_INVALID) { + EXAMPLE_ERR("Failed to allocate buffer\n"); + return; + } + tick = odp_timer_current_tick(tp); - num = args->tmo_count; - - while (1) { - odp_timeout_t tmo; + while ((int)odp_atomic_load_u32(&remain) > 0) { + odp_buffer_t buf; + odp_timer_set_t rc; - buf = odp_schedule_one(&queue, ODP_SCHED_WAIT); + tick += period; + rc = odp_timer_set_abs(ttp->tim, tick, &ttp->buf); + if (odp_unlikely(rc != ODP_TIMER_SUCCESS)) { + /* Too early or too late timeout requested */ + EXAMPLE_ABORT("odp_timer_set_abs() failed: %s\n", + timerset2str(rc)); + } - tmo = odp_timeout_from_buffer(buf); + /* Get the next expired timeout */ + /* Use 1.5 second timeout for scheduler */ + uint64_t sched_tmo = odp_schedule_wait_time(1500000000ULL); + buf = odp_schedule(&queue, sched_tmo); + /* Check if odp_schedule() timed 
out, possibly there are no + * remaining timeouts to receive */ + if (buf == ODP_BUFFER_INVALID) + continue; /* Re-check the remain counter */ + if (odp_buffer_type(buf) != ODP_BUFFER_TYPE_TIMEOUT) { + /* Not a default timeout buffer */ + EXAMPLE_ABORT("Unexpected buffer type (%u) received\n", + odp_buffer_type(buf)); + } + odp_timeout_t tmo = odp_timeout_from_buf(buf); tick = odp_timeout_tick(tmo); - + ttp = odp_timeout_user_ptr(tmo); + ttp->buf = buf; + if (!odp_timeout_fresh(tmo)) { + /* Not the expected expiration tick, timer has + * been reset or cancelled or freed */ + EXAMPLE_ABORT("Unexpected timeout received (timer %x, tick %"PRIu64")\n", + ttp->tim, tick); + } EXAMPLE_DBG(" [%i] timeout, tick %"PRIu64"\n", thr, tick); - odp_buffer_free(buf); - - num--; - - if (num == 0) - break; - - tick += period; - - odp_timer_absolute_tmo(test_timer, tick, - queue, ODP_BUFFER_INVALID); + odp_atomic_dec_u32(&remain); } - if (odp_queue_sched_type(queue) == ODP_SCHED_SYNC_ATOMIC) - odp_schedule_release_atomic(); + /* Cancel and free last timer used */ + (void)odp_timer_cancel(ttp->tim, &ttp->buf); + if (ttp->buf != ODP_BUFFER_INVALID) + odp_buffer_free(ttp->buf); + else + EXAMPLE_ERR("Lost timeout buffer at timer cancel\n"); + /* Since we have cancelled the timer, there is no timeout buffer to + * return from odp_timer_free() */ + (void)odp_timer_free(ttp->tim); } @@ -193,14 +248,14 @@ static void parse_args(int argc, char *argv[], test_args_t *args) /* defaults */ args->cpu_count = 0; /* all CPU's */ args->resolution_us = 10000; - args->min_us = args->resolution_us; + args->min_us = 0; args->max_us = 10000000; args->period_us = 1000000; args->tmo_count = 30; while (1) { opt = getopt_long(argc, argv, "+c:r:m:x:p:t:h", - longopts, &long_index); + longopts, &long_index); if (opt == -1) break; /* No more options */ @@ -244,13 +299,13 @@ int main(int argc, char *argv[]) odph_linux_pthread_t thread_tbl[MAX_WORKERS]; test_args_t args; int num_workers; - odp_buffer_pool_t pool; odp_queue_t queue; int first_cpu; uint64_t cycles, ns; odp_queue_param_t param; - odp_shm_t shm; odp_buffer_pool_param_t params; + odp_timer_pool_param_t tparams; + odp_timer_pool_info_t tpinfo; printf("\nODP timer example starts\n"); @@ -310,23 +365,43 @@ int main(int argc, char *argv[]) printf("timeouts: %i\n", args.tmo_count); /* - * Create message pool + * Create buffer pool for timeouts */ - shm = odp_shm_reserve("msg_pool", - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - params.buf_size = 0; params.buf_align = 0; params.num_bufs = MSG_NUM_BUFS; params.buf_type = ODP_BUFFER_TYPE_TIMEOUT; - pool = odp_buffer_pool_create("msg_pool", shm, ¶ms); + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); if (pool == ODP_BUFFER_POOL_INVALID) { - EXAMPLE_ERR("Pool create failed.\n"); + EXAMPLE_ERR("Buffer pool create failed.\n"); return -1; } + tparams.res_ns = args.resolution_us*ODP_TIME_USEC; + tparams.min_tmo = args.min_us*ODP_TIME_USEC; + tparams.max_tmo = args.max_us*ODP_TIME_USEC; + tparams.num_timers = num_workers; /* One timer per worker */ + tparams.private = 0; /* Shared */ + tparams.clk_src = ODP_CLOCK_CPU; + tp = odp_timer_pool_create("timer_pool", &tparams); + if (tp == ODP_TIMER_POOL_INVALID) { + EXAMPLE_ERR("Timer pool create failed.\n"); + return -1; + } + odp_timer_pool_start(); + + odp_shm_print_all(); + (void)odp_timer_pool_info(tp, &tpinfo); + printf("Timer pool\n"); + printf("----------\n"); + printf(" name: %s\n", tpinfo.name); + printf(" resolution: %"PRIu64" ns\n", tpinfo.param.res_ns); + printf(" min tmo: 
%"PRIu64" ticks\n", tpinfo.param.min_tmo); + printf(" max tmo: %"PRIu64" ticks\n", tpinfo.param.max_tmo); + printf("\n"); + /* * Create a queue for timer test */ @@ -342,20 +417,7 @@ int main(int argc, char *argv[]) return -1; } - test_timer = odp_timer_create("test_timer", pool, - args.resolution_us*ODP_TIME_USEC, - args.min_us*ODP_TIME_USEC, - args.max_us*ODP_TIME_USEC); - - if (test_timer == ODP_TIMER_INVALID) { - EXAMPLE_ERR("Timer create failed.\n"); - return -1; - } - - - odp_shm_print_all(); - - printf("CPU freq %"PRIu64" hz\n", odp_sys_cpu_hz()); + printf("CPU freq %"PRIu64" Hz\n", odp_sys_cpu_hz()); printf("Cycles vs nanoseconds:\n"); ns = 0; cycles = odp_time_ns_to_cycles(ns); @@ -375,6 +437,9 @@ int main(int argc, char *argv[]) printf("\n"); + /* Initialize number of timeouts to receive */ + odp_atomic_init_u32(&remain, args.tmo_count * num_workers); + /* Barrier to sync test case execution */ odp_barrier_init(&test_barrier, num_workers); diff --git a/platform/linux-generic/include/api/odp_timer.h b/platform/linux-generic/include/api/odp_timer.h index 6cca27c..69402ef 100644 --- a/platform/linux-generic/include/api/odp_timer.h +++ b/platform/linux-generic/include/api/odp_timer.h @@ -8,7 +8,7 @@ /** * @file * - * ODP timer + * ODP timer service */ #ifndef ODP_TIMER_H_ @@ -18,149 +18,346 @@ extern "C" { #endif +#include #include #include -#include #include /** @defgroup odp_timer ODP TIMER * @{ */ +struct odp_timer_pool_s; /**< Forward declaration */ + +/** +* ODP timer pool handle (platform dependent) +*/ +typedef struct odp_timer_pool_s *odp_timer_pool_t; + /** - * ODP timer handle + * Invalid timer pool handle (platform dependent). */ +#define ODP_TIMER_POOL_INVALID NULL + +/** + * Clock sources for timers in timer pool. + */ +typedef enum { + /** Use CPU clock as clock source for timers */ + ODP_CLOCK_CPU, + /** Use external clock as clock source for timers */ + ODP_CLOCK_EXT + /* Platform dependent which other clock sources exist */ +} odp_timer_clk_src_t; + +/** +* ODP timer handle (platform dependent). +*/ typedef uint32_t odp_timer_t; -/** Invalid timer */ -#define ODP_TIMER_INVALID 0 +/** +* ODP timeout handle (platform dependent). +*/ +typedef void *odp_timeout_t; +/** + * Invalid timer handle (platform dependent). + */ +#define ODP_TIMER_INVALID ((uint32_t)~0U) /** - * ODP timeout handle + * Return values of timer set calls. + */ +typedef enum { +/** + * Timer set operation succeeded */ -typedef odp_buffer_t odp_timer_tmo_t; + ODP_TIMER_SUCCESS = 0, +/** + * Timer set operation failed, expiration too early. + * Either retry with a later expiration time or process the timeout + * immediately. */ + ODP_TIMER_TOOEARLY = -1, -/** Invalid timeout */ -#define ODP_TIMER_TMO_INVALID 0 +/** + * Timer set operation failed, expiration too late. + * Truncate the expiration time against the maximum timeout for the + * timer pool. */ + ODP_TIMER_TOOLATE = -2, +/** + * Timer set operation failed because no timeout buffer specified and no + * timeout buffer present in the timer (timer inactive/expired). + */ + ODP_TIMER_NOBUF = -3 +} odp_timer_set_t; +/** Maximum timer pool name length in chars (including null char) */ +#define ODP_TIMER_POOL_NAME_LEN 32 -/** - * Timeout notification +/** Timer pool parameters + * Timer pool parameters are used when creating and querying timer pools. 
*/ -typedef odp_buffer_t odp_timeout_t; +typedef struct { + uint64_t res_ns; /**< Timeout resolution in nanoseconds */ + uint64_t min_tmo; /**< Minimum relative timeout in nanoseconds */ + uint64_t max_tmo; /**< Maximum relative timeout in nanoseconds */ + uint32_t num_timers; /**< (Minimum) number of supported timers */ + int private; /**< Shared (false) or private (true) timer pool */ + odp_timer_clk_src_t clk_src; /**< Clock source for timers */ +} odp_timer_pool_param_t; +/** + * Create a timer pool + * + * @param name Name of the timer pool. The string will be copied. + * @param params Timer pool parameters. The content will be copied. + * + * @return Timer pool handle if successful, otherwise ODP_TIMER_POOL_INVALID + * and errno set + */ +odp_timer_pool_t +odp_timer_pool_create(const char *name, + const odp_timer_pool_param_t *params); /** - * Create a timer + * Start a timer pool * - * Creates a new timer with requested properties. + * Start all created timer pools, enabling the allocation of timers. + * The purpose of this call is to coordinate the creation of multiple timer + * pools that may use the same underlying HW resources. + * This function may be called multiple times. + */ +void odp_timer_pool_start(void); + +/** + * Destroy a timer pool * - * @param name Name - * @param pool Buffer pool for allocating timeout notifications - * @param resolution Timeout resolution in nanoseconds - * @param min_tmo Minimum timeout duration in nanoseconds - * @param max_tmo Maximum timeout duration in nanoseconds + * Destroy a timer pool, freeing all resources. + * All timers must have been freed. * - * @return Timer handle if successful, otherwise ODP_TIMER_INVALID + * @param tpid Timer pool identifier */ -odp_timer_t odp_timer_create(const char *name, odp_buffer_pool_t pool, - uint64_t resolution, uint64_t min_tmo, - uint64_t max_tmo); +void odp_timer_pool_destroy(odp_timer_pool_t tpid); /** * Convert timer ticks to nanoseconds * - * @param timer Timer + * @param tpid Timer pool identifier * @param ticks Timer ticks * * @return Nanoseconds */ -uint64_t odp_timer_tick_to_ns(odp_timer_t timer, uint64_t ticks); +uint64_t odp_timer_tick_to_ns(odp_timer_pool_t tpid, uint64_t ticks); /** * Convert nanoseconds to timer ticks * - * @param timer Timer + * @param tpid Timer pool identifier * @param ns Nanoseconds * * @return Timer ticks */ -uint64_t odp_timer_ns_to_tick(odp_timer_t timer, uint64_t ns); +uint64_t odp_timer_ns_to_tick(odp_timer_pool_t tpid, uint64_t ns); /** - * Timer resolution in nanoseconds + * Current tick value * - * @param timer Timer + * @param tpid Timer pool identifier * - * @return Resolution in nanoseconds + * @return Current time in timer ticks + */ +uint64_t odp_timer_current_tick(odp_timer_pool_t tpid); + +/** + * ODP timer pool information and configuration */ -uint64_t odp_timer_resolution(odp_timer_t timer); + +typedef struct { + odp_timer_pool_param_t param; /**< Parameters specified at creation */ + uint32_t cur_timers; /**< Number of currently allocated timers */ + uint32_t hwm_timers; /**< High watermark of allocated timers */ + const char *name; /**< Name of timer pool */ +} odp_timer_pool_info_t; /** - * Maximum timeout in timer ticks + * Query timer pool configuration and current state * - * @param timer Timer + * @param tpid Timer pool identifier + * @param[out] info Pointer to information buffer * - * @return Maximum timeout in timer ticks + * @retval 0 Success + * @retval -1 Failure. Info could not be retrieved. 
*/ -uint64_t odp_timer_maximum_tmo(odp_timer_t timer); +int odp_timer_pool_info(odp_timer_pool_t tpid, + odp_timer_pool_info_t *info); /** - * Current timer tick + * Allocate a timer * - * @param timer Timer + * Create a timer (allocating all necessary resources e.g. timeout event) from + * the timer pool. The user_ptr is copied to timeouts and can be retrieved + * using the odp_timeout_user_ptr() call. * - * @return Current time in timer ticks + * @param tpid Timer pool identifier + * @param queue Destination queue for timeout notifications + * @param user_ptr User defined pointer or NULL to be copied to timeouts + * + * @return Timer handle if successful, otherwise ODP_TIMER_INVALID and + * errno set. */ -uint64_t odp_timer_current_tick(odp_timer_t timer); +odp_timer_t odp_timer_alloc(odp_timer_pool_t tpid, + odp_queue_t queue, + void *user_ptr); /** - * Request timeout with an absolute timer tick + * Free a timer * - * When tick reaches tmo_tick, the timer enqueues the timeout notification into - * the destination queue. + * Free (destroy) a timer, reclaiming associated resources. + * The timeout buffer for an active timer will be returned. + * The timeout buffer for an expired timer will not be returned. It is the + * responsibility of the application to handle this timeout when it is received. * - * @param timer Timer - * @param tmo_tick Absolute timer tick value which triggers the timeout - * @param queue Destination queue for the timeout notification - * @param buf User defined timeout notification buffer. When - * ODP_BUFFER_INVALID, default timeout notification is used. + * @param tim Timer handle + * @return Buffer handle of timeout buffer or ODP_BUFFER_INVALID + */ +odp_buffer_t odp_timer_free(odp_timer_t tim); + +/** + * Set a timer (absolute time) with a user-provided timeout buffer + * + * Set (arm) the timer to expire at specific time. The timeout + * buffer will be enqueued when the timer expires. + * + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. + * + * @param tim Timer + * @param abs_tck Expiration time in absolute timer ticks + * @param[in,out] tmo_buf Reference to a buffer variable that points to + * timeout buffer or NULL to reuse the existing timeout buffer. Any existing + * timeout buffer that is replaced by a successful set operation will be + * returned here. + * + * @retval ODP_TIMER_SUCCESS Operation succeeded + * @retval ODP_TIMER_TOOEARLY Operation failed because expiration tick too + * early + * @retval ODP_TIMER_TOOLATE Operation failed because expiration tick too + * late + * @retval ODP_TIMER_NOBUF Operation failed because timeout buffer not + * specified in odp_timer_set call and not present in timer + */ +int odp_timer_set_abs(odp_timer_t tim, + uint64_t abs_tck, + odp_buffer_t *tmo_buf); + +/** + * Set a timer with a relative expiration time and user-provided buffer. + * + * Set (arm) the timer to expire at a relative future time. + * + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. + * + * @param tim Timer + * @param rel_tck Expiration time in timer ticks relative to current time of + * the timer pool the timer belongs to + * @param[in,out] tmo_buf Reference to a buffer variable that points to + * timeout buffer or NULL to reuse the existing timeout buffer. Any existing + * timeout buffer that is replaced by a successful set operation will be + * returned here. 
+ * + * @retval ODP_TIMER_SUCCESS Operation succeeded + * @retval ODP_TIMER_TOOEARLY Operation failed because expiration tick too + * early + * @retval ODP_TIMER_TOOLATE Operation failed because expiration tick too + * late + * @retval ODP_TIMER_NOBUF Operation failed because timeout buffer not + * specified in call and not present in timer + */ +int odp_timer_set_rel(odp_timer_t tim, + uint64_t rel_tck, + odp_buffer_t *tmo_buf); + +/** + * Cancel a timer * - * @return Timeout handle if successful, otherwise ODP_TIMER_TMO_INVALID + * Cancel a timer, preventing future expiration and delivery. Return any + * present timeout buffer. + * + * A timer that has already expired may be impossible to cancel and the timeout + * will instead be delivered to the destination queue. + * + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. + * + * @param tim Timer + * @param[out] tmo_buf Pointer to a buffer variable + * @retval 0 Success, active timer cancelled, timeout returned in '*tmo_buf' + * @retval -1 Failure, timer already expired (or inactive) */ -odp_timer_tmo_t odp_timer_absolute_tmo(odp_timer_t timer, uint64_t tmo_tick, - odp_queue_t queue, odp_buffer_t buf); +int odp_timer_cancel(odp_timer_t tim, odp_buffer_t *tmo_buf); /** - * Cancel a timeout + * Return timeout handle that is associated with timeout buffer + * + * Note: any invalid parameters will cause undefined behavior and may cause + * the application to abort or crash. * - * @param timer Timer - * @param tmo Timeout to cancel + * @param buf A buffer of type ODP_BUFFER_TYPE_TIMEOUT + * + * @return timeout handle + */ +odp_timeout_t odp_timeout_from_buf(odp_buffer_t buf); + +/** + * Check for fresh timeout + * If the corresponding timer has been reset or cancelled since this timeout + * was enqueued, the timeout is stale (not fresh). * - * @return 0 if successful + * @param tmo Timeout handle + * @retval 1 Timeout is fresh + * @retval 0 Timeout is stale */ -int odp_timer_cancel_tmo(odp_timer_t timer, odp_timer_tmo_t tmo); +int odp_timeout_fresh(odp_timeout_t tmo); /** - * Convert buffer handle to timeout handle + * Return timer handle for the timeout * - * @param buf Buffer handle + * Note: any invalid parameters will cause undefined behavior and may cause + * the application to abort or crash. * - * @return Timeout buffer handle + * @param tmo Timeout handle + * + * @return Timer handle */ -odp_timeout_t odp_timeout_from_buffer(odp_buffer_t buf); +odp_timer_t odp_timeout_timer(odp_timeout_t tmo); /** - * Return absolute timeout tick + * Return expiration tick for the timeout + * + * Note: any invalid parameters will cause undefined behavior and may cause + * the application to abort or crash. * - * @param tmo Timeout buffer handle + * @param tmo Timeout handle * - * @return Absolute timeout tick + * @return Expiration tick */ uint64_t odp_timeout_tick(odp_timeout_t tmo); /** + * Return user pointer for the timeout + * The user pointer was specified when the timer was allocated. + * + * Note: any invalid parameters will cause undefined behavior and may cause + * the application to abort or crash. 
+ * + * @param tmo Timeout handle + * + * @return User pointer + */ +void *odp_timeout_user_ptr(odp_timeout_t tmo); + +/** * @} */ diff --git a/platform/linux-generic/include/odp_timer_internal.h b/platform/linux-generic/include/odp_timer_internal.h index 0d10d00..ed5ee7e 100644 --- a/platform/linux-generic/include/odp_timer_internal.h +++ b/platform/linux-generic/include/odp_timer_internal.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2013, Linaro Limited +/* Copyright (c) 2014, Linaro Limited * All rights reserved. * * SPDX-License-Identifier: BSD-3-Clause @@ -8,74 +8,44 @@ /** * @file * - * ODP timer timeout descriptor - implementation internal + * ODP timeout descriptor - implementation internal */ #ifndef ODP_TIMER_INTERNAL_H_ #define ODP_TIMER_INTERNAL_H_ -#ifdef __cplusplus -extern "C" { -#endif - -#include -#include -#include +#include +#include #include #include #include -struct timeout_t; - -typedef struct timeout_t { - struct timeout_t *next; - int timer_id; - int tick; - uint64_t tmo_tick; - odp_queue_t queue; - odp_buffer_t buf; - odp_buffer_t tmo_buf; -} timeout_t; - - -struct odp_timeout_hdr_t; - /** - * Timeout notification header + * Internal Timeout header */ -typedef struct odp_timeout_hdr_t { +typedef struct { + /* common buffer header */ odp_buffer_hdr_t buf_hdr; - timeout_t meta; - - uint8_t buf_data[]; + /* Requested expiration time */ + uint64_t expiration; + /* User ptr inherited from parent timer */ + void *user_ptr; + /* Parent timer */ + odp_timer_t timer; } odp_timeout_hdr_t; typedef struct odp_timeout_hdr_stride { uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))]; } odp_timeout_hdr_stride; -_ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) == - ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), - "ODP_TIMEOUT_HDR_T__SIZE_ERR"); - -_ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0, - "ODP_TIMEOUT_HDR_T__SIZE_ERR2"); - /** - * Return timeout header + * Return the timeout header */ -static inline odp_timeout_hdr_t *odp_timeout_hdr(odp_timeout_t tmo) +static inline odp_timeout_hdr_t *odp_timeout_hdr(odp_buffer_t buf) { - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)tmo); - return (odp_timeout_hdr_t *)(uintptr_t)buf_hdr; + return (odp_timeout_hdr_t *)odp_buf_to_hdr(buf); } - - -#ifdef __cplusplus -} -#endif - #endif diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c index 65b44b9..ef26b02 100644 --- a/platform/linux-generic/odp_timer.c +++ b/platform/linux-generic/odp_timer.c @@ -4,430 +4,844 @@ * SPDX-License-Identifier: BSD-3-Clause */ -#include -#include -#include +/** + * @file + * + * ODP timer service + * + */ + +/* Check if compiler supports 16-byte atomics. GCC needs -mcx16 flag on x86 */ +/* Using spin lock actually seems faster on Core2 */ +#ifdef ODP_ATOMIC_U128 +/* TB_NEEDS_PAD defined if sizeof(odp_buffer_t) != 8 */ +#define TB_NEEDS_PAD +#define TB_SET_PAD(x) ((x).pad = 0) +#else +#define TB_SET_PAD(x) (void)(x) +#endif + +/* For snprint, POSIX timers and sigevent */ +#define _POSIX_C_SOURCE 200112L +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include #include -#include +#include +#include +#include +#include +#include #include -#include +#include +#include +#include #include +#include #include -#include -#include +#include +#include +#include -#include -#include +#define TMO_UNUSED ((uint64_t)0xFFFFFFFFFFFFFFFF) +/* TMO_INACTIVE is or-ed with the expiration tick to indicate an expired timer. 
+ * The original expiration tick (63 bits) is still available so it can be used + * for checking the freshness of received timeouts */ +#define TMO_INACTIVE ((uint64_t)0x8000000000000000) + +#ifdef __ARM_ARCH +#define PREFETCH(ptr) __builtin_prefetch((ptr), 0, 0) +#else +#define PREFETCH(ptr) (void)(ptr) +#endif + +/****************************************************************************** + * Mutual exclusion in the absence of CAS16 + *****************************************************************************/ + +#ifndef ODP_ATOMIC_U128 +#define NUM_LOCKS 1024 +static _odp_atomic_flag_t locks[NUM_LOCKS]; /* Multiple locks per cache line! */ +#define IDX2LOCK(idx) (&locks[(idx) % NUM_LOCKS]) +#endif + +/****************************************************************************** + * Translation between timeout buffer and timeout header + *****************************************************************************/ + +static odp_timeout_hdr_t *timeout_hdr_from_buf(odp_buffer_t buf) +{ + return (odp_timeout_hdr_t *)odp_buf_to_hdr(buf); +} -#include +/****************************************************************************** + * odp_timer abstract datatype + *****************************************************************************/ + +typedef struct tick_buf_s { + odp_atomic_u64_t exp_tck;/* Expiration tick or TMO_xxx */ + odp_buffer_t tmo_buf;/* ODP_BUFFER_INVALID if timer not active */ +#ifdef TB_NEEDS_PAD + uint32_t pad;/* Need to be able to access padding for successful CAS */ +#endif +} tick_buf_t +#ifdef ODP_ATOMIC_U128 +ODP_ALIGNED(16) /* 16-byte atomic operations need properly aligned addresses */ +#endif +; + +_ODP_STATIC_ASSERT(sizeof(tick_buf_t) == 16, "sizeof(tick_buf_t) == 16"); + +typedef struct odp_timer_s { + void *user_ptr; + odp_queue_t queue;/* Used for free list when timer is free */ +} odp_timer; + +static void timer_init(odp_timer *tim, + tick_buf_t *tb, + odp_queue_t _q, + void *_up) +{ + tim->queue = _q; + tim->user_ptr = _up; + tb->tmo_buf = ODP_BUFFER_INVALID; + /* All pad fields need a defined and constant value */ + TB_SET_PAD(*tb); + /* Release the timer by setting timer state to inactive */ + _odp_atomic_u64_store_mm(&tb->exp_tck, TMO_INACTIVE, _ODP_MEMMODEL_RLS); +} -#define NUM_TIMERS 1 -#define MAX_TICKS 1024 -#define MAX_RES ODP_TIME_SEC -#define MIN_RES (100*ODP_TIME_USEC) +/* Teardown when timer is freed */ +static void timer_fini(odp_timer *tim, tick_buf_t *tb) +{ + assert(tb->exp_tck.v == TMO_UNUSED); + assert(tb->tmo_buf == ODP_BUFFER_INVALID); + tim->queue = ODP_QUEUE_INVALID; + tim->user_ptr = NULL; +} +static inline uint32_t get_next_free(odp_timer *tim) +{ + /* Reusing 'queue' for next free index */ + return tim->queue; +} -typedef struct { - odp_spinlock_t lock; - timeout_t *list; -} tick_t; - -typedef struct { - int allocated; - volatile int active; - volatile uint64_t cur_tick; - timer_t timerid; - odp_timer_t timer_hdl; - odp_buffer_pool_t pool; - uint64_t resolution_ns; - uint64_t max_ticks; - tick_t tick[MAX_TICKS]; - -} timer_ring_t; - -typedef struct { - odp_spinlock_t lock; - int num_timers; - timer_ring_t timer[NUM_TIMERS]; +static inline void set_next_free(odp_timer *tim, uint32_t nf) +{ + assert(tim->queue == ODP_QUEUE_INVALID); + /* Reusing 'queue' for next free index */ + tim->queue = nf; +} -} timer_global_t; +/****************************************************************************** + * odp_timer_pool abstract datatype + * Inludes alloc and free timer + 
*****************************************************************************/ + +typedef struct odp_timer_pool_s { +/* Put frequently accessed fields in the first cache line */ + odp_atomic_u64_t cur_tick;/* Current tick value */ + uint64_t min_rel_tck; + uint64_t max_rel_tck; + tick_buf_t *tick_buf; /* Expiration tick and timeout buffer */ + odp_timer *timers; /* User pointer and queue handle (and lock) */ + odp_atomic_u32_t high_wm;/* High watermark of allocated timers */ + odp_spinlock_t itimer_running; + odp_spinlock_t lock; + uint32_t num_alloc;/* Current number of allocated timers */ + uint32_t first_free;/* 0..max_timers-1 => free timer */ + uint32_t tp_idx;/* Index into timer_pool array */ + odp_timer_pool_param_t param; + char name[ODP_TIMER_POOL_NAME_LEN]; + odp_shm_t shm; + timer_t timerid; +} odp_timer_pool; + +#define MAX_TIMER_POOLS 255 /* Leave one for ODP_TIMER_INVALID */ +#define INDEX_BITS 24 +static odp_atomic_u32_t num_timer_pools; +static odp_timer_pool *timer_pool[MAX_TIMER_POOLS]; + +static inline odp_timer_pool *handle_to_tp(odp_timer_t hdl) +{ + uint32_t tp_idx = hdl >> INDEX_BITS; + if (odp_likely(tp_idx < MAX_TIMER_POOLS)) { + odp_timer_pool *tp = timer_pool[tp_idx]; + if (odp_likely(tp != NULL)) + return timer_pool[tp_idx]; + } + ODP_ABORT("Invalid timer handle %#x\n", hdl); +} -/* Global */ -static timer_global_t odp_timer; +static inline uint32_t handle_to_idx(odp_timer_t hdl, + struct odp_timer_pool_s *tp) +{ + uint32_t idx = hdl & ((1U << INDEX_BITS) - 1U); + PREFETCH(&tp->tick_buf[idx]); + if (odp_likely(idx < odp_atomic_load_u32(&tp->high_wm))) + return idx; + ODP_ABORT("Invalid timer handle %#x\n", hdl); +} -static void add_tmo(tick_t *tick, timeout_t *tmo) +static inline odp_timer_t tp_idx_to_handle(struct odp_timer_pool_s *tp, + uint32_t idx) { - odp_spinlock_lock(&tick->lock); + assert(idx < (1U << INDEX_BITS)); + return (tp->tp_idx << INDEX_BITS) | idx; +} - tmo->next = tick->list; - tick->list = tmo; +/* Forward declarations */ +static void itimer_init(odp_timer_pool *tp); +static void itimer_fini(odp_timer_pool *tp); + +static odp_timer_pool *odp_timer_pool_new( + const char *_name, + const odp_timer_pool_param_t *param) +{ + uint32_t tp_idx = odp_atomic_fetch_add_u32(&num_timer_pools, 1); + if (odp_unlikely(tp_idx >= MAX_TIMER_POOLS)) { + /* Restore the previous value */ + odp_atomic_sub_u32(&num_timer_pools, 1); + errno = ENFILE; /* Table overflow */ + return NULL; + } + size_t sz0 = ODP_ALIGN_ROUNDUP(sizeof(odp_timer_pool), + ODP_CACHE_LINE_SIZE); + size_t sz1 = ODP_ALIGN_ROUNDUP(sizeof(tick_buf_t) * param->num_timers, + ODP_CACHE_LINE_SIZE); + size_t sz2 = ODP_ALIGN_ROUNDUP(sizeof(odp_timer) * param->num_timers, + ODP_CACHE_LINE_SIZE); + odp_shm_t shm = odp_shm_reserve(_name, sz0 + sz1 + sz2, + ODP_CACHE_LINE_SIZE, ODP_SHM_SW_ONLY); + if (odp_unlikely(shm == ODP_SHM_INVALID)) + ODP_ABORT("%s: timer pool shm-alloc(%zuKB) failed\n", + _name, (sz0 + sz1 + sz2) / 1024); + odp_timer_pool *tp = (odp_timer_pool *)odp_shm_addr(shm); + odp_atomic_init_u64(&tp->cur_tick, 0); + snprintf(tp->name, sizeof(tp->name), "%s", _name); + tp->shm = shm; + tp->param = *param; + tp->min_rel_tck = odp_timer_ns_to_tick(tp, param->min_tmo); + tp->max_rel_tck = odp_timer_ns_to_tick(tp, param->max_tmo); + tp->num_alloc = 0; + odp_atomic_init_u32(&tp->high_wm, 0); + tp->first_free = 0; + tp->tick_buf = (void *)((char *)odp_shm_addr(shm) + sz0); + tp->timers = (void *)((char *)odp_shm_addr(shm) + sz0 + sz1); + /* Initialize all odp_timer entries */ + uint32_t i; + for (i = 
0; i < tp->param.num_timers; i++) { + set_next_free(&tp->timers[i], i + 1); + tp->timers[i].user_ptr = NULL; + odp_atomic_init_u64(&tp->tick_buf[i].exp_tck, TMO_UNUSED); + tp->tick_buf[i].tmo_buf = ODP_BUFFER_INVALID; + } + tp->tp_idx = tp_idx; + odp_spinlock_init(&tp->lock); + odp_spinlock_init(&tp->itimer_running); + timer_pool[tp_idx] = tp; + if (tp->param.clk_src == ODP_CLOCK_CPU) + itimer_init(tp); + return tp; +} - odp_spinlock_unlock(&tick->lock); +static void odp_timer_pool_del(odp_timer_pool *tp) +{ + odp_spinlock_lock(&tp->lock); + timer_pool[tp->tp_idx] = NULL; + /* Wait for itimer thread to stop running */ + odp_spinlock_lock(&tp->itimer_running); + if (tp->num_alloc != 0) { + /* It's a programming error to attempt to destroy a */ + /* timer pool which is still in use */ + ODP_ABORT("%s: timers in use\n", tp->name); + } + if (tp->param.clk_src == ODP_CLOCK_CPU) + itimer_fini(tp); + int rc = odp_shm_free(tp->shm); + if (rc != 0) + ODP_ABORT("Failed to free shared memory (%d)\n", rc); } -static timeout_t *rem_tmo(tick_t *tick) +static inline odp_timer_t timer_alloc(odp_timer_pool *tp, + odp_queue_t queue, + void *user_ptr) { - timeout_t *tmo; + odp_timer_t hdl; + odp_spinlock_lock(&tp->lock); + if (odp_likely(tp->num_alloc < tp->param.num_timers)) { + tp->num_alloc++; + /* Remove first unused timer from free list */ + assert(tp->first_free != tp->param.num_timers); + uint32_t idx = tp->first_free; + odp_timer *tim = &tp->timers[idx]; + tp->first_free = get_next_free(tim); + /* Initialize timer */ + timer_init(tim, &tp->tick_buf[idx], queue, user_ptr); + if (odp_unlikely(tp->num_alloc > + odp_atomic_load_u32(&tp->high_wm))) + /* Update high_wm last with release model to + * ensure timer initialization is visible */ + _odp_atomic_u32_store_mm(&tp->high_wm, + tp->num_alloc, + _ODP_MEMMODEL_RLS); + hdl = tp_idx_to_handle(tp, idx); + } else { + errno = ENFILE; /* Reusing file table overflow */ + hdl = ODP_TIMER_INVALID; + } + odp_spinlock_unlock(&tp->lock); + return hdl; +} - odp_spinlock_lock(&tick->lock); +static odp_buffer_t timer_cancel(odp_timer_pool *tp, + uint32_t idx, + uint64_t new_state); - tmo = tick->list; +static inline odp_buffer_t timer_free(odp_timer_pool *tp, uint32_t idx) +{ + odp_timer *tim = &tp->timers[idx]; - if (tmo) - tick->list = tmo->next; + /* Free the timer by setting timer state to unused and + * grab any timeout buffer */ + odp_buffer_t old_buf = timer_cancel(tp, idx, TMO_UNUSED); - odp_spinlock_unlock(&tick->lock); + /* Destroy timer */ + timer_fini(tim, &tp->tick_buf[idx]); - if (tmo) - tmo->next = NULL; + /* Insert timer into free list */ + odp_spinlock_lock(&tp->lock); + set_next_free(tim, tp->first_free); + tp->first_free = idx; + assert(tp->num_alloc != 0); + tp->num_alloc--; + odp_spinlock_unlock(&tp->lock); - return tmo; + return old_buf; } -/** - * Search and delete tmo entry from timeout list - * return -1 : on error.. 
handle not in list - * 0 : success - */ -static int find_and_del_tmo(timeout_t **tmo, odp_timer_tmo_t handle) -{ - timeout_t *cur, *prev; - prev = NULL; +/****************************************************************************** + * Operations on timers + * expire/reset/cancel timer + *****************************************************************************/ - for (cur = *tmo; cur != NULL; prev = cur, cur = cur->next) { - if (cur->tmo_buf == handle) { - if (prev == NULL) - *tmo = cur->next; - else - prev->next = cur->next; - - break; +static bool timer_reset(uint32_t idx, + uint64_t abs_tck, + odp_buffer_t *tmo_buf, + odp_timer_pool *tp) +{ + bool success = true; + tick_buf_t *tb = &tp->tick_buf[idx]; + + if (tmo_buf == NULL || *tmo_buf == ODP_BUFFER_INVALID) { +#ifdef ODP_ATOMIC_U128 + tick_buf_t new, old; + do { + /* Relaxed and non-atomic read of current values */ + old.exp_tck.v = tb->exp_tck.v; + old.tmo_buf = tb->tmo_buf; + TB_SET_PAD(old); + /* Check if there actually is a timeout buffer + * present */ + if (old.tmo_buf == ODP_BUFFER_INVALID) { + /* Cannot reset a timer with neither old nor + * new timeout buffer */ + success = false; + break; + } + /* Set up new values */ + new.exp_tck.v = abs_tck; + new.tmo_buf = old.tmo_buf; + TB_SET_PAD(new); + /* Atomic CAS will fail if we experienced torn reads, + * retry update sequence until CAS succeeds */ + } while (!_odp_atomic_u128_cmp_xchg_mm( + (_odp_atomic_u128_t *)tb, + (_uint128_t *)&old, + (_uint128_t *)&new, + _ODP_MEMMODEL_RLS, + _ODP_MEMMODEL_RLX)); +#else +#ifdef __ARM_ARCH + /* Since barriers are not good for C-A15, we take an + * alternative approach using relaxed memory model */ + uint64_t old; + /* Swap in new expiration tick, get back old tick which + * will indicate active/inactive timer state */ + old = _odp_atomic_u64_xchg_mm(&tb->exp_tck, abs_tck, + _ODP_MEMMODEL_RLX); + if ((old & TMO_INACTIVE) != 0) { + /* Timer was inactive (cancelled or expired), + * we can't reset a timer without a timeout buffer. + * Attempt to restore inactive state, we don't + * want this timer to continue as active without + * timeout as this will trigger unnecessary and + * aborted expiration attempts. + * We don't care if we fail, then some other thread + * reset or cancelled the timer. 
Without any + * synchronization between the threads, we have a + * data race and the behavior is undefined */ + (void)_odp_atomic_u64_cmp_xchg_strong_mm( + &tb->exp_tck, + &abs_tck, + old, + _ODP_MEMMODEL_RLX, + _ODP_MEMMODEL_RLX); + success = false; + } +#else + /* Take a related lock */ + while (_odp_atomic_flag_tas(IDX2LOCK(idx))) + /* While lock is taken, spin using relaxed loads */ + while (_odp_atomic_flag_load(IDX2LOCK(idx))) + odp_spin(); + + /* Only if there is a timeout buffer can be reset the timer */ + if (odp_likely(tb->tmo_buf != ODP_BUFFER_INVALID)) { + /* Write the new expiration tick */ + tb->exp_tck.v = abs_tck; + } else { + /* Cannot reset a timer with neither old nor new + * timeout buffer */ + success = false; } - } - - if (!cur) - /* couldn't find tmo in list */ - return -1; - /* application to free tmo_buf provided by absolute_tmo call */ - return 0; + /* Release the lock */ + _odp_atomic_flag_clear(IDX2LOCK(idx)); +#endif +#endif + } else { + /* We have a new timeout buffer which replaces any old one */ + odp_buffer_t old_buf = ODP_BUFFER_INVALID; +#ifdef ODP_ATOMIC_U128 + tick_buf_t new, old; + new.exp_tck.v = abs_tck; + new.tmo_buf = *tmo_buf; + TB_SET_PAD(new); + /* We are releasing the new timeout buffer to some other + * thread */ + _odp_atomic_u128_xchg_mm((_odp_atomic_u128_t *)tb, + (_uint128_t *)&new, + (_uint128_t *)&old, + _ODP_MEMMODEL_ACQ_RLS); + old_buf = old.tmo_buf; +#else + /* Take a related lock */ + while (_odp_atomic_flag_tas(IDX2LOCK(idx))) + /* While lock is taken, spin using relaxed loads */ + while (_odp_atomic_flag_load(IDX2LOCK(idx))) + odp_spin(); + + /* Swap in new buffer, save any old buffer */ + old_buf = tb->tmo_buf; + tb->tmo_buf = *tmo_buf; + + /* Write the new expiration tick */ + tb->exp_tck.v = abs_tck; + + /* Release the lock */ + _odp_atomic_flag_clear(IDX2LOCK(idx)); +#endif + /* Return old timeout buffer */ + *tmo_buf = old_buf; + } + return success; } -int odp_timer_cancel_tmo(odp_timer_t timer_hdl, odp_timer_tmo_t tmo) +static odp_buffer_t timer_cancel(odp_timer_pool *tp, + uint32_t idx, + uint64_t new_state) { - int id; - int tick_idx; - timeout_t *cancel_tmo; - odp_timeout_hdr_t *tmo_hdr; - tick_t *tick; - - /* get id */ - id = (int)timer_hdl - 1; - - tmo_hdr = odp_timeout_hdr((odp_timeout_t) tmo); - /* get tmo_buf to cancel */ - cancel_tmo = &tmo_hdr->meta; - - tick_idx = cancel_tmo->tick; - tick = &odp_timer.timer[id].tick[tick_idx]; + tick_buf_t *tb = &tp->tick_buf[idx]; + odp_buffer_t old_buf; + +#ifdef ODP_ATOMIC_U128 + tick_buf_t new, old; + /* Update the timer state (e.g. cancel the current timeout) */ + new.exp_tck.v = new_state; + /* Swap out the old buffer */ + new.tmo_buf = ODP_BUFFER_INVALID; + TB_SET_PAD(new); + _odp_atomic_u128_xchg_mm((_odp_atomic_u128_t *)tb, + (_uint128_t *)&new, (_uint128_t *)&old, + _ODP_MEMMODEL_RLX); + old_buf = old.tmo_buf; +#else + /* Take a related lock */ + while (_odp_atomic_flag_tas(IDX2LOCK(idx))) + /* While lock is taken, spin using relaxed loads */ + while (_odp_atomic_flag_load(IDX2LOCK(idx))) + odp_spin(); + + /* Update the timer state (e.g. 
cancel the current timeout) */ + tb->exp_tck.v = new_state; + + /* Swap out the old buffer */ + old_buf = tb->tmo_buf; + tb->tmo_buf = ODP_BUFFER_INVALID; + + /* Release the lock */ + _odp_atomic_flag_clear(IDX2LOCK(idx)); +#endif + /* Return the old buffer */ + return old_buf; +} - odp_spinlock_lock(&tick->lock); - /* search and delete tmo from tick list */ - if (find_and_del_tmo(&tick->list, tmo) != 0) { - odp_spinlock_unlock(&tick->lock); - ODP_DBG("Couldn't find the tmo (%d) in tick list\n", (int)tmo); - return -1; +static unsigned timer_expire(odp_timer_pool *tp, uint32_t idx, uint64_t tick) +{ + odp_timer *tim = &tp->timers[idx]; + tick_buf_t *tb = &tp->tick_buf[idx]; + odp_buffer_t tmo_buf = ODP_BUFFER_INVALID; + uint64_t exp_tck; +#ifdef ODP_ATOMIC_U128 + /* Atomic re-read for correctness */ + exp_tck = _odp_atomic_u64_load_mm(&tb->exp_tck, _ODP_MEMMODEL_RLX); + /* Re-check exp_tck */ + if (odp_likely(exp_tck <= tick)) { + /* Attempt to grab timeout buffer, replace with inactive timer + * and invalid buffer */ + tick_buf_t new, old; + old.exp_tck.v = exp_tck; + old.tmo_buf = tb->tmo_buf; + TB_SET_PAD(old); + /* Set the inactive/expired bit keeping the expiration tick so + * that we can check against the expiration tick of the timeout + * when it is received */ + new.exp_tck.v = exp_tck | TMO_INACTIVE; + new.tmo_buf = ODP_BUFFER_INVALID; + TB_SET_PAD(new); + int succ = _odp_atomic_u128_cmp_xchg_mm( + (_odp_atomic_u128_t *)tb, + (_uint128_t *)&old, (_uint128_t *)&new, + _ODP_MEMMODEL_RLS, _ODP_MEMMODEL_RLX); + if (succ) + tmo_buf = old.tmo_buf; + /* Else CAS failed, something changed => skip timer + * this tick, it will be checked again next tick */ + } + /* Else false positive, ignore */ +#else + /* Take a related lock */ + while (_odp_atomic_flag_tas(IDX2LOCK(idx))) + /* While lock is taken, spin using relaxed loads */ + while (_odp_atomic_flag_load(IDX2LOCK(idx))) + odp_spin(); + /* Proper check for timer expired */ + exp_tck = tb->exp_tck.v; + if (odp_likely(exp_tck <= tick)) { + /* Verify that there is a timeout buffer */ + if (odp_likely(tb->tmo_buf != ODP_BUFFER_INVALID)) { + /* Grab timeout buffer, replace with inactive timer + * and invalid buffer */ + tmo_buf = tb->tmo_buf; + tb->tmo_buf = ODP_BUFFER_INVALID; + /* Set the inactive/expired bit keeping the expiration + * tick so that we can check against the expiration + * tick of the timeout when it is received */ + tb->exp_tck.v |= TMO_INACTIVE; + } + /* Else somehow active timer without user buffer */ + } + /* Else false positive, ignore */ + /* Release the lock */ + _odp_atomic_flag_clear(IDX2LOCK(idx)); +#endif + if (odp_likely(tmo_buf != ODP_BUFFER_INVALID)) { + /* Fill in metadata fields in system timeout buffer */ + if (odp_buffer_type(tmo_buf) == ODP_BUFFER_TYPE_TIMEOUT) { + /* Convert from buffer to timeout hdr */ + odp_timeout_hdr_t *tmo_hdr = + timeout_hdr_from_buf(tmo_buf); + tmo_hdr->timer = tp_idx_to_handle(tp, idx); + tmo_hdr->expiration = exp_tck; + tmo_hdr->user_ptr = tim->user_ptr; + } + /* Else ignore buffers of other types */ + /* Post the timeout to the destination queue */ + int rc = odp_queue_enq(tim->queue, tmo_buf); + if (odp_unlikely(rc != 0)) + ODP_ABORT("Failed to enqueue timeout buffer (%d)\n", + rc); + return 1; + } else { + /* Else false positive, ignore */ + return 0; } - odp_spinlock_unlock(&tick->lock); - - return 0; } -static void notify_function(union sigval sigval) +static unsigned odp_timer_pool_expire(odp_timer_pool_t tpid, uint64_t tick) { - uint64_t cur_tick; - timeout_t *tmo; - 
tick_t *tick; - timer_ring_t *timer; - - timer = sigval.sival_ptr; - - if (timer->active == 0) { - ODP_DBG("Timer (%u) not active\n", timer->timer_hdl); - return; + tick_buf_t *array = &tpid->tick_buf[0]; + uint32_t high_wm = _odp_atomic_u32_load_mm(&tpid->high_wm, + _ODP_MEMMODEL_ACQ); + unsigned nexp = 0; + uint32_t i; + + assert(high_wm <= tpid->param.num_timers); + for (i = 0; i < high_wm;) { +#ifdef __ARM_ARCH + /* As a rare occurence, we can outsmart the HW prefetcher + * and the compiler (GCC -fprefetch-loop-arrays) with some + * tuned manual prefetching (32x16=512B ahead), seems to + * give 30% better performance on ARM C-A15 */ + PREFETCH(&array[i + 32]); +#endif + /* Non-atomic read for speed */ + uint64_t exp_tck = array[i++].exp_tck.v; + if (odp_unlikely(exp_tck <= tick)) { + /* Attempt to expire timer */ + nexp += timer_expire(tpid, i - 1, tick); + } } + return nexp; +} - /* ODP_DBG("Tick\n"); */ - - cur_tick = timer->cur_tick++; - - odp_sync_stores(); - - tick = &timer->tick[cur_tick % MAX_TICKS]; - - while ((tmo = rem_tmo(tick)) != NULL) { - odp_queue_t queue; - odp_buffer_t buf; - - queue = tmo->queue; - buf = tmo->buf; - - if (buf != tmo->tmo_buf) - odp_buffer_free(tmo->tmo_buf); +/****************************************************************************** + * POSIX timer support + * Functions that use Linux/POSIX per-process timers and related facilities + *****************************************************************************/ - odp_queue_enq(queue, buf); +static void timer_notify(sigval_t sigval) +{ + odp_timer_pool *tp = (odp_timer_pool *)sigval.sival_ptr; +#ifdef __ARM_ARCH + odp_timer *array = &tp->timers[0]; + uint32_t i; + /* Prefetch initial cache lines (match 32 above) */ + for (i = 0; i < 32; i += ODP_CACHE_LINE_SIZE / sizeof(array[0])) + PREFETCH(&array[i]); +#endif + uint64_t prev_tick = odp_atomic_fetch_inc_u64(&tp->cur_tick); + /* Attempt to acquire the lock, check if the old value was clear */ + if (odp_spinlock_trylock(&tp->itimer_running)) { + /* Scan timer array, looking for timers to expire */ + (void)odp_timer_pool_expire(tp, prev_tick); + odp_spinlock_unlock(&tp->itimer_running); } + /* Else skip scan of timers. 
cur_tick was updated and next itimer + * invocation will process older expiration ticks as well */ } -static void timer_start(timer_ring_t *timer) +static void itimer_init(odp_timer_pool *tp) { struct sigevent sigev; struct itimerspec ispec; uint64_t res, sec, nsec; - ODP_DBG("\nTimer (%u) starts\n", timer->timer_hdl); + ODP_DBG("Creating POSIX timer for timer pool %s, period %" + PRIu64" ns\n", tp->name, tp->param.res_ns); memset(&sigev, 0, sizeof(sigev)); memset(&ispec, 0, sizeof(ispec)); sigev.sigev_notify = SIGEV_THREAD; - sigev.sigev_notify_function = notify_function; - sigev.sigev_value.sival_ptr = timer; + sigev.sigev_notify_function = timer_notify; + sigev.sigev_value.sival_ptr = tp; - if (timer_create(CLOCK_MONOTONIC, &sigev, &timer->timerid)) { - ODP_DBG("Timer create failed\n"); - return; - } + if (timer_create(CLOCK_MONOTONIC, &sigev, &tp->timerid)) + ODP_ABORT("timer_create() returned error %s\n", + strerror(errno)); - res = timer->resolution_ns; + res = tp->param.res_ns; sec = res / ODP_TIME_SEC; - nsec = res - sec*ODP_TIME_SEC; + nsec = res - sec * ODP_TIME_SEC; ispec.it_interval.tv_sec = (time_t)sec; ispec.it_interval.tv_nsec = (long)nsec; ispec.it_value.tv_sec = (time_t)sec; ispec.it_value.tv_nsec = (long)nsec; - if (timer_settime(timer->timerid, 0, &ispec, NULL)) { - ODP_DBG("Timer set failed\n"); - return; - } - - return; + if (timer_settime(&tp->timerid, 0, &ispec, NULL)) + ODP_ABORT("timer_settime() returned error %s\n", + strerror(errno)); } -int odp_timer_init_global(void) +static void itimer_fini(odp_timer_pool *tp) { - ODP_DBG("Timer init ..."); - - memset(&odp_timer, 0, sizeof(timer_global_t)); - - odp_spinlock_init(&odp_timer.lock); - - ODP_DBG("done\n"); - - return 0; + if (timer_delete(tp->timerid) != 0) + ODP_ABORT("timer_delete() returned error %s\n", + strerror(errno)); } -int odp_timer_disarm_all(void) +/****************************************************************************** + * Public API functions + * Some parameter checks and error messages + * No modificatios of internal state + *****************************************************************************/ +odp_timer_pool_t +odp_timer_pool_create(const char *name, + const odp_timer_pool_param_t *param) { - int timers; - struct itimerspec ispec; + /* Verify that buffer pool can be used for timeouts */ + odp_timer_pool_t tp = odp_timer_pool_new(name, param); + return tp; +} - odp_spinlock_lock(&odp_timer.lock); +void odp_timer_pool_start(void) +{ + /* Nothing to do here, timer pools are started by the create call */ +} - timers = odp_timer.num_timers; +void odp_timer_pool_destroy(odp_timer_pool_t tpid) +{ + odp_timer_pool_del(tpid); +} - ispec.it_interval.tv_sec = 0; - ispec.it_interval.tv_nsec = 0; - ispec.it_value.tv_sec = 0; - ispec.it_value.tv_nsec = 0; +uint64_t odp_timer_tick_to_ns(odp_timer_pool_t tpid, uint64_t ticks) +{ + return ticks * tpid->param.res_ns; +} - for (; timers >= 0; timers--) { - if (timer_settime(odp_timer.timer[timers].timerid, - 0, &ispec, NULL)) { - ODP_DBG("Timer reset failed\n"); - odp_spinlock_unlock(&odp_timer.lock); - return -1; - } - odp_timer.num_timers--; - } +uint64_t odp_timer_ns_to_tick(odp_timer_pool_t tpid, uint64_t ns) +{ + return (uint64_t)(ns / tpid->param.res_ns); +} - odp_spinlock_unlock(&odp_timer.lock); +uint64_t odp_timer_current_tick(odp_timer_pool_t tpid) +{ + /* Relaxed atomic read for lowest overhead */ + return odp_atomic_load_u64(&tpid->cur_tick); +} +int odp_timer_pool_info(odp_timer_pool_t tpid, + odp_timer_pool_info_t *buf) +{ + buf->param = 
tpid->param; + buf->cur_timers = tpid->num_alloc; + buf->hwm_timers = odp_atomic_load_u32(&tpid->high_wm); + buf->name = tpid->name; return 0; } -odp_timer_t odp_timer_create(const char *name, odp_buffer_pool_t pool, - uint64_t resolution_ns, uint64_t min_ns, - uint64_t max_ns) +odp_timer_t odp_timer_alloc(odp_timer_pool_t tpid, + odp_queue_t queue, + void *user_ptr) { - uint32_t id; - timer_ring_t *timer; - odp_timer_t timer_hdl; - int i; - uint64_t max_ticks; - (void) name; - - if (resolution_ns < MIN_RES) - resolution_ns = MIN_RES; - - if (resolution_ns > MAX_RES) - resolution_ns = MAX_RES; - - max_ticks = max_ns / resolution_ns; - - if (max_ticks > MAX_TICKS) { - ODP_DBG("Maximum timeout too long: %"PRIu64" ticks\n", - max_ticks); - return ODP_TIMER_INVALID; - } - - if (min_ns < resolution_ns) { - ODP_DBG("Min timeout %"PRIu64" ns < resolution %"PRIu64" ns\n", - min_ns, resolution_ns); - return ODP_TIMER_INVALID; - } - - odp_spinlock_lock(&odp_timer.lock); - - if (odp_timer.num_timers >= NUM_TIMERS) { - odp_spinlock_unlock(&odp_timer.lock); - ODP_DBG("All timers allocated\n"); - return ODP_TIMER_INVALID; - } - - for (id = 0; id < NUM_TIMERS; id++) { - if (odp_timer.timer[id].allocated == 0) - break; - } - - timer = &odp_timer.timer[id]; - timer->allocated = 1; - odp_timer.num_timers++; - - odp_spinlock_unlock(&odp_timer.lock); - - timer_hdl = id + 1; - - timer->timer_hdl = timer_hdl; - timer->pool = pool; - timer->resolution_ns = resolution_ns; - timer->max_ticks = MAX_TICKS; - - for (i = 0; i < MAX_TICKS; i++) { - odp_spinlock_init(&timer->tick[i].lock); - timer->tick[i].list = NULL; + if (odp_unlikely(queue == ODP_QUEUE_INVALID)) + ODP_ABORT("%s: Invalid queue handle\n", tpid->name); + /* We don't care about the validity of user_ptr because we will not + * attempt to dereference it */ + odp_timer_t hdl = timer_alloc(tpid, queue, user_ptr); + if (odp_likely(hdl != ODP_TIMER_INVALID)) { + /* Success */ + return hdl; } - - timer->active = 1; - odp_sync_stores(); - - timer_start(timer); - - return timer_hdl; + /* errno set by timer_alloc() */ + return ODP_TIMER_INVALID; } -odp_timer_tmo_t odp_timer_absolute_tmo(odp_timer_t timer_hdl, uint64_t tmo_tick, - odp_queue_t queue, odp_buffer_t buf) +odp_buffer_t odp_timer_free(odp_timer_t hdl) { - int id; - uint64_t tick; - uint64_t cur_tick; - timeout_t *new_tmo; - odp_buffer_t tmo_buf; - odp_timeout_hdr_t *tmo_hdr; - timer_ring_t *timer; - - id = (int)timer_hdl - 1; - timer = &odp_timer.timer[id]; - - cur_tick = timer->cur_tick; - if (tmo_tick <= cur_tick) { - ODP_DBG("timeout too close\n"); - return ODP_TIMER_TMO_INVALID; - } - - if ((tmo_tick - cur_tick) > MAX_TICKS) { - ODP_DBG("timeout too far: cur %"PRIu64" tmo %"PRIu64"\n", - cur_tick, tmo_tick); - return ODP_TIMER_TMO_INVALID; - } - - tick = tmo_tick % MAX_TICKS; - - tmo_buf = odp_buffer_alloc(timer->pool); - if (tmo_buf == ODP_BUFFER_INVALID) { - ODP_DBG("tmo buffer alloc failed\n"); - return ODP_TIMER_TMO_INVALID; - } - - tmo_hdr = odp_timeout_hdr((odp_timeout_t) tmo_buf); - new_tmo = &tmo_hdr->meta; - - new_tmo->timer_id = id; - new_tmo->tick = (int)tick; - new_tmo->tmo_tick = tmo_tick; - new_tmo->queue = queue; - new_tmo->tmo_buf = tmo_buf; + odp_timer_pool *tp = handle_to_tp(hdl); + uint32_t idx = handle_to_idx(hdl, tp); + odp_buffer_t old_buf = timer_free(tp, idx); + return old_buf; +} - if (buf != ODP_BUFFER_INVALID) - new_tmo->buf = buf; +int odp_timer_set_abs(odp_timer_t hdl, + uint64_t abs_tck, + odp_buffer_t *tmo_buf) +{ + odp_timer_pool *tp = handle_to_tp(hdl); + uint32_t 
idx = handle_to_idx(hdl, tp); + uint64_t cur_tick = odp_atomic_load_u64(&tp->cur_tick); + if (odp_unlikely(abs_tck < cur_tick + tp->min_rel_tck)) + return ODP_TIMER_TOOEARLY; + if (odp_unlikely(abs_tck > cur_tick + tp->max_rel_tck)) + return ODP_TIMER_TOOLATE; + if (timer_reset(idx, abs_tck, tmo_buf, tp)) + return ODP_TIMER_SUCCESS; else - new_tmo->buf = tmo_buf; - - add_tmo(&timer->tick[tick], new_tmo); - - return tmo_buf; + return ODP_TIMER_NOBUF; } -uint64_t odp_timer_tick_to_ns(odp_timer_t timer_hdl, uint64_t ticks) +int odp_timer_set_rel(odp_timer_t hdl, + uint64_t rel_tck, + odp_buffer_t *tmo_buf) { - uint32_t id; - - id = timer_hdl - 1; - return ticks * odp_timer.timer[id].resolution_ns; + odp_timer_pool *tp = handle_to_tp(hdl); + uint32_t idx = handle_to_idx(hdl, tp); + uint64_t abs_tck = odp_atomic_load_u64(&tp->cur_tick) + rel_tck; + if (odp_unlikely(rel_tck < tp->min_rel_tck)) + return ODP_TIMER_TOOEARLY; + if (odp_unlikely(rel_tck > tp->max_rel_tck)) + return ODP_TIMER_TOOLATE; + if (timer_reset(idx, abs_tck, tmo_buf, tp)) + return ODP_TIMER_SUCCESS; + else + return ODP_TIMER_NOBUF; } -uint64_t odp_timer_ns_to_tick(odp_timer_t timer_hdl, uint64_t ns) +int odp_timer_cancel(odp_timer_t hdl, odp_buffer_t *tmo_buf) { - uint32_t id; - - id = timer_hdl - 1; - return ns / odp_timer.timer[id].resolution_ns; + odp_timer_pool *tp = handle_to_tp(hdl); + uint32_t idx = handle_to_idx(hdl, tp); + /* Set the expiration tick of the timer to TMO_INACTIVE */ + odp_buffer_t old_buf = timer_cancel(tp, idx, TMO_INACTIVE); + if (old_buf != ODP_BUFFER_INVALID) { + *tmo_buf = old_buf; + return 0; /* Active timer cancelled, timeout returned */ + } else { + return -1; /* Timer already expired, no timeout returned */ + } } -uint64_t odp_timer_resolution(odp_timer_t timer_hdl) +odp_timeout_t odp_timeout_from_buf(odp_buffer_t buf) { - uint32_t id; - - id = timer_hdl - 1; - return odp_timer.timer[id].resolution_ns; + /* This check not mandated by the API specification */ + if (odp_buffer_type(buf) != ODP_BUFFER_TYPE_TIMEOUT) + ODP_ABORT("Buffer not a timeout"); + return (odp_timeout_t)timeout_hdr_from_buf(buf); } -uint64_t odp_timer_maximum_tmo(odp_timer_t timer_hdl) +int odp_timeout_fresh(odp_timeout_t tmo) { - uint32_t id; - - id = timer_hdl - 1; - return odp_timer.timer[id].max_ticks; + const odp_timeout_hdr_t *hdr = (odp_timeout_hdr_t *)tmo; + odp_timer_t hdl = hdr->timer; + odp_timer_pool *tp = handle_to_tp(hdl); + uint32_t idx = handle_to_idx(hdl, tp); + tick_buf_t *tb = &tp->tick_buf[idx]; + uint64_t exp_tck = odp_atomic_load_u64(&tb->exp_tck); + /* Return true if the timer still has the same expiration tick + * (ignoring the inactive/expired bit) as the timeout */ + return hdr->expiration == (exp_tck & ~TMO_INACTIVE); } -uint64_t odp_timer_current_tick(odp_timer_t timer_hdl) +odp_timer_t odp_timeout_timer(odp_timeout_t tmo) { - uint32_t id; + const odp_timeout_hdr_t *hdr = (odp_timeout_hdr_t *)tmo; + return hdr->timer; +} - id = timer_hdl - 1; - return odp_timer.timer[id].cur_tick; +uint64_t odp_timeout_tick(odp_timeout_t tmo) +{ + const odp_timeout_hdr_t *hdr = (odp_timeout_hdr_t *)tmo; + return hdr->expiration; } -odp_timeout_t odp_timeout_from_buffer(odp_buffer_t buf) +void *odp_timeout_user_ptr(odp_timeout_t tmo) { - return (odp_timeout_t) buf; + const odp_timeout_hdr_t *hdr = (odp_timeout_hdr_t *)tmo; + return hdr->user_ptr; } -uint64_t odp_timeout_tick(odp_timeout_t tmo) +int odp_timer_init_global(void) { - odp_timeout_hdr_t *tmo_hdr = odp_timeout_hdr(tmo); - return tmo_hdr->meta.tmo_tick; 
+#ifndef ODP_ATOMIC_U128 + uint32_t i; + for (i = 0; i < NUM_LOCKS; i++) + _odp_atomic_flag_clear(&locks[i]); +#else + ODP_DBG("Using lock-less timer implementation\n"); +#endif + odp_atomic_init_u32(&num_timer_pools, 0); + return 0; }
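
Appendix (not part of the patch): a minimal receive-path sketch showing how an
application is expected to consume timeouts produced by this implementation
and use odp_timeout_fresh() to discard stale deliveries from timers that were
reset or cancelled after the timeout was enqueued. The 'struct flow' type and
the assumption that its address was passed as user_ptr to odp_timer_alloc()
are purely illustrative.

#include <odp.h>

/* Hypothetical per-flow state; its address was given to odp_timer_alloc() */
struct flow {
	odp_timer_t tim;      /* timer owned by this flow */
	odp_buffer_t tmo_buf; /* timeout buffer to reuse on the next set */
};

/* Handle one buffer delivered by the scheduler or dequeued from the queue */
static void handle_timeout(odp_buffer_t buf)
{
	if (odp_buffer_type(buf) != ODP_BUFFER_TYPE_TIMEOUT)
		return; /* e.g. a packet, handled elsewhere */

	odp_timeout_t tmo = odp_timeout_from_buf(buf);
	struct flow *fl = odp_timeout_user_ptr(tmo);

	/* Keep the buffer so the next odp_timer_set_abs/rel() can reuse it */
	fl->tmo_buf = buf;

	if (!odp_timeout_fresh(tmo))
		return; /* timer was reset or cancelled; ignore stale timeout */

	/* Fresh expiration: act on it, e.g. retransmit or tear down the flow */
	uint64_t exp_tck = odp_timeout_tick(tmo);
	(void)exp_tck;
}

The freshness check is possible because timer_expire() sets the TMO_INACTIVE
bit while preserving the 63-bit expiration tick, so odp_timeout_fresh() only
has to compare the tick stored in the timeout header with the timer's current
exp_tck value.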