From patchwork Mon Dec 14 15:46:22 2015
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 58352
From: Christophe Milard
To: anders.roxell@linaro.org, mike.holmes@linaro.org, stuart.haslam@linaro.org
Cc: lng-odp@lists.linaro.org
Date: Mon, 14 Dec 2015 16:46:22 +0100
Message-Id: <1450107982-30298-2-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1450107982-30298-1-git-send-email-christophe.milard@linaro.org>
References: <1450107982-30298-1-git-send-email-christophe.milard@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [PATCH 2/2] validation: removing synchronizers tests
List-Id: "The OpenDataPlane (ODP) List"

Now redundant with atomic, barrier and lock tests.
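All of the functional tests removed by this patch revolve around one "global lock owner" consistency check: a thread records its id in a shared variable while it believes it has exclusive access, delays, and then verifies nobody overwrote it. A minimal single-threaded sketch of that bookkeeping, using C11 atomics rather than the ODP API (the `global_lock_owner` name comes from the removed file; everything else here is illustrative):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Shared "who owns the critical section" marker; 0 means nobody. */
static _Atomic uint32_t global_lock_owner = 0;

/* One pass of the owner-consistency protocol. Under a correct lock,
 * no other thread can change the owner while we hold the lock, so a
 * correct run observes zero failures. */
static uint32_t owner_protocol_step(uint32_t my_id)
{
	uint32_t failures = 0;

	/* On entry the section must be unowned. */
	if (atomic_load(&global_lock_owner) != 0)
		failures++;

	atomic_store(&global_lock_owner, my_id);

	/* ... a delay loop would sit here; owner must still be us ... */
	if (atomic_load(&global_lock_owner) != my_id)
		failures++;

	atomic_store(&global_lock_owner, 0);
	return failures;
}
```

The removed no_lock test runs exactly this protocol *without* a lock and asserts that failures DO occur (proving the methodology can detect races); each lock test then asserts zero failures.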
Signed-off-by: Christophe Milard
---
 configure.ac                                       |    1 -
 platform/linux-generic/test/Makefile.am            |    1 -
 test/validation/Makefile.am                        |    1 -
 test/validation/synchronizers/.gitignore           |    1 -
 test/validation/synchronizers/Makefile.am          |   10 -
 test/validation/synchronizers/synchronizers.c      | 1626 --------------------
 test/validation/synchronizers/synchronizers.h      |   53 -
 test/validation/synchronizers/synchronizers_main.c |   12 -
 8 files changed, 1705 deletions(-)
 delete mode 100644 test/validation/synchronizers/.gitignore
 delete mode 100644 test/validation/synchronizers/Makefile.am
 delete mode 100644 test/validation/synchronizers/synchronizers.c
 delete mode 100644 test/validation/synchronizers/synchronizers.h
 delete mode 100644 test/validation/synchronizers/synchronizers_main.c

diff --git a/configure.ac b/configure.ac
index 7a05574..9b3ecd7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -368,7 +368,6 @@ AC_CONFIG_FILES([Makefile
 		 test/validation/random/Makefile
 		 test/validation/scheduler/Makefile
 		 test/validation/std_clib/Makefile
-		 test/validation/synchronizers/Makefile
 		 test/validation/thread/Makefile
 		 test/validation/time/Makefile
 		 test/validation/timer/Makefile
diff --git a/platform/linux-generic/test/Makefile.am b/platform/linux-generic/test/Makefile.am
index aa246d2..db923b8 100644
--- a/platform/linux-generic/test/Makefile.am
+++ b/platform/linux-generic/test/Makefile.am
@@ -25,7 +25,6 @@ TESTS = pktio/pktio_run \
 	${top_builddir}/test/validation/random/random_main$(EXEEXT) \
 	${top_builddir}/test/validation/scheduler/scheduler_main$(EXEEXT) \
 	${top_builddir}/test/validation/std_clib/std_clib_main$(EXEEXT) \
-	${top_builddir}/test/validation/synchronizers/synchronizers_main$(EXEEXT) \
 	${top_builddir}/test/validation/thread/thread_main$(EXEEXT) \
 	${top_builddir}/test/validation/time/time_main$(EXEEXT) \
 	${top_builddir}/test/validation/timer/timer_main$(EXEEXT) \
diff --git a/test/validation/Makefile.am b/test/validation/Makefile.am
index 9a5bbff..90d32ea 100644
--- a/test/validation/Makefile.am
+++ b/test/validation/Makefile.am
@@ -16,7 +16,6 @@ ODP_MODULES = atomic \
 	      random \
 	      scheduler \
 	      std_clib \
-	      synchronizers \
 	      thread \
 	      time \
 	      timer \
diff --git a/test/validation/synchronizers/.gitignore b/test/validation/synchronizers/.gitignore
deleted file mode 100644
index 6aad9df..0000000
--- a/test/validation/synchronizers/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-synchronizers_main
diff --git a/test/validation/synchronizers/Makefile.am b/test/validation/synchronizers/Makefile.am
deleted file mode 100644
index dd504d5..0000000
--- a/test/validation/synchronizers/Makefile.am
+++ /dev/null
@@ -1,10 +0,0 @@
-include ../Makefile.inc
-
-noinst_LTLIBRARIES = libtestsynchronizers.la
-libtestsynchronizers_la_SOURCES = synchronizers.c
-
-test_PROGRAMS = synchronizers_main$(EXEEXT)
-dist_synchronizers_main_SOURCES = synchronizers_main.c
-synchronizers_main_LDADD = libtestsynchronizers.la $(LIBCUNIT_COMMON) $(LIBODP)
-
-EXTRA_DIST = synchronizers.h
diff --git a/test/validation/synchronizers/synchronizers.c b/test/validation/synchronizers/synchronizers.c
deleted file mode 100644
index cebe0d2..0000000
--- a/test/validation/synchronizers/synchronizers.c
+++ /dev/null
@@ -1,1626 +0,0 @@
-/* Copyright (c) 2014, Linaro Limited
- * All rights reserved.
- *
- * SPDX-License-Identifier: BSD-3-Clause
- */
-
-#include
-#include
-#include
-#include
-#include
-#include "synchronizers.h"
-
-#define VERBOSE 0
-#define MAX_ITERATIONS 1000
-#define BARRIER_ITERATIONS 64
-
-#define SLOW_BARRIER_DELAY 400
-#define BASE_DELAY 6
-#define MIN_DELAY 1
-
-#define NUM_TEST_BARRIERS BARRIER_ITERATIONS
-#define NUM_RESYNC_BARRIERS 100
-
-#define ADD_SUB_CNT 5
-
-#define CNT 10
-#define BARRIER_DELAY 10
-#define U32_INIT_VAL (1UL << 10)
-#define U64_INIT_VAL (1ULL << 33)
-
-#define GLOBAL_SHM_NAME "GlobalLockTest"
-
-#define UNUSED __attribute__((__unused__))
-
-static odp_atomic_u32_t a32u;
-static odp_atomic_u64_t a64u;
-
-typedef __volatile uint32_t volatile_u32_t;
-typedef __volatile uint64_t volatile_u64_t;
-
-typedef struct {
-	odp_atomic_u32_t wait_cnt;
-} custom_barrier_t;
-
-typedef struct {
-	/* Global variables */
-	uint32_t g_num_threads;
-	uint32_t g_iterations;
-	uint32_t g_verbose;
-	uint32_t g_max_num_cores;
-
-	odp_barrier_t test_barriers[NUM_TEST_BARRIERS];
-	custom_barrier_t custom_barrier1[NUM_TEST_BARRIERS];
-	custom_barrier_t custom_barrier2[NUM_TEST_BARRIERS];
-	volatile_u32_t slow_thread_num;
-	volatile_u32_t barrier_cnt1;
-	volatile_u32_t barrier_cnt2;
-	odp_barrier_t global_barrier;
-
-	/* Used to periodically resync within the lock functional tests */
-	odp_barrier_t barrier_array[NUM_RESYNC_BARRIERS];
-
-	/* Locks */
-	odp_spinlock_t global_spinlock;
-	odp_spinlock_recursive_t global_recursive_spinlock;
-	odp_ticketlock_t global_ticketlock;
-	odp_rwlock_t global_rwlock;
-	odp_rwlock_recursive_t global_recursive_rwlock;
-
-	volatile_u32_t global_lock_owner;
-} global_shared_mem_t;
-
-/* Per-thread memory */
-typedef struct {
-	global_shared_mem_t *global_mem;
-
-	int thread_id;
-	int thread_core;
-
-	odp_spinlock_t per_thread_spinlock;
-	odp_spinlock_recursive_t per_thread_recursive_spinlock;
-	odp_ticketlock_t per_thread_ticketlock;
-	odp_rwlock_t per_thread_rwlock;
-	odp_rwlock_recursive_t per_thread_recursive_rwlock;
-
-	volatile_u64_t delay_counter;
-} per_thread_mem_t;
-
-static odp_shm_t global_shm;
-static global_shared_mem_t *global_mem;
-
-/*
-* Delay a consistent amount of time. Ideally the amount of CPU time taken
-* is linearly proportional to "iterations". The goal is to try to do some
-* work that the compiler optimizer won't optimize away, and also to
-* minimize loads and stores (at least to different memory addresses)
-* so as to not affect or be affected by caching issues. This does NOT have to
-* correlate to a specific number of cpu cycles or be consistent across
-* CPU architectures.
-*/
-static void thread_delay(per_thread_mem_t *per_thread_mem, uint32_t iterations)
-{
-	volatile_u64_t *counter_ptr;
-	uint32_t cnt;
-
-	counter_ptr = &per_thread_mem->delay_counter;
-
-	for (cnt = 1; cnt <= iterations; cnt++)
-		(*counter_ptr)++;
-}
-
-/* Initialise per-thread memory */
-static per_thread_mem_t *thread_init(void)
-{
-	global_shared_mem_t *global_mem;
-	per_thread_mem_t *per_thread_mem;
-	odp_shm_t global_shm;
-	uint32_t per_thread_mem_len;
-
-	per_thread_mem_len = sizeof(per_thread_mem_t);
-	per_thread_mem = malloc(per_thread_mem_len);
-	memset(per_thread_mem, 0, per_thread_mem_len);
-
-	per_thread_mem->delay_counter = 1;
-
-	per_thread_mem->thread_id = odp_thread_id();
-	per_thread_mem->thread_core = odp_cpu_id();
-
-	global_shm = odp_shm_lookup(GLOBAL_SHM_NAME);
-	global_mem = odp_shm_addr(global_shm);
-	CU_ASSERT_PTR_NOT_NULL(global_mem);
-
-	per_thread_mem->global_mem = global_mem;
-
-	return per_thread_mem;
-}
-
-static void thread_finalize(per_thread_mem_t *per_thread_mem)
-{
-	free(per_thread_mem);
-}
-
-static void custom_barrier_init(custom_barrier_t *custom_barrier,
-				uint32_t num_threads)
-{
-	odp_atomic_init_u32(&custom_barrier->wait_cnt, num_threads);
-}
-
-static void custom_barrier_wait(custom_barrier_t *custom_barrier)
-{
-	volatile_u64_t counter = 1;
-	uint32_t delay_cnt, wait_cnt;
-
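The `custom_barrier_*` helpers above implement a counting barrier: `custom_barrier_init` stores the number of participating threads in an atomic counter, and `custom_barrier_wait` decrements it and busy-waits until it reaches zero. The same idea in plain C11 atomics (the `demo_*` names are mine, and this is a single-threaded illustration of the counter logic, not the ODP implementation):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Counting barrier: the number of threads still to arrive. */
typedef struct {
	_Atomic uint32_t wait_cnt;
} demo_barrier_t;

static void demo_barrier_init(demo_barrier_t *b, uint32_t num_threads)
{
	atomic_init(&b->wait_cnt, num_threads);
}

/* Record one arrival. Returns 1 when every thread has arrived and the
 * caller may proceed; a real waiter would spin until this holds. */
static int demo_barrier_arrive(demo_barrier_t *b)
{
	atomic_fetch_sub(&b->wait_cnt, 1);
	return atomic_load(&b->wait_cnt) == 0;
}
```

Note this one-shot form cannot be reused until it is re-initialized, which is why the removed test keeps an array of barriers and has the designated "slow" thread re-initialize each one between trials.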
odp_atomic_sub_u32(&custom_barrier->wait_cnt, 1); - - wait_cnt = 1; - while (wait_cnt != 0) { - for (delay_cnt = 1; delay_cnt <= BARRIER_DELAY; delay_cnt++) - counter++; - - wait_cnt = odp_atomic_load_u32(&custom_barrier->wait_cnt); - } -} - -static uint32_t barrier_test(per_thread_mem_t *per_thread_mem, - odp_bool_t no_barrier_test) -{ - global_shared_mem_t *global_mem; - uint32_t barrier_errs, iterations, cnt, i_am_slow_thread; - uint32_t thread_num, slow_thread_num, next_slow_thread, num_threads; - uint32_t lock_owner_delay, barrier_cnt1, barrier_cnt2; - - thread_num = odp_thread_id(); - global_mem = per_thread_mem->global_mem; - num_threads = global_mem->g_num_threads; - iterations = BARRIER_ITERATIONS; - - barrier_errs = 0; - lock_owner_delay = SLOW_BARRIER_DELAY; - - for (cnt = 1; cnt < iterations; cnt++) { - /* Wait here until all of the threads reach this point */ - custom_barrier_wait(&global_mem->custom_barrier1[cnt]); - - barrier_cnt1 = global_mem->barrier_cnt1; - barrier_cnt2 = global_mem->barrier_cnt2; - - if ((barrier_cnt1 != cnt) || (barrier_cnt2 != cnt)) { - printf("thread_num=%" PRIu32 " barrier_cnts of %" PRIu32 - " %" PRIu32 " cnt=%" PRIu32 "\n", - thread_num, barrier_cnt1, barrier_cnt2, cnt); - barrier_errs++; - } - - /* Wait here until all of the threads reach this point */ - custom_barrier_wait(&global_mem->custom_barrier2[cnt]); - - slow_thread_num = global_mem->slow_thread_num; - i_am_slow_thread = thread_num == slow_thread_num; - next_slow_thread = slow_thread_num + 1; - if (num_threads < next_slow_thread) - next_slow_thread = 1; - - /* - * Now run the test, which involves having all but one thread - * immediately calling odp_barrier_wait(), and one thread wait a - * moderate amount of time and then calling odp_barrier_wait(). - * The test fails if any of the first group of threads - * has not waited for the "slow" thread. The "slow" thread is - * responsible for re-initializing the barrier for next trial. 
- */ - if (i_am_slow_thread) { - thread_delay(per_thread_mem, lock_owner_delay); - lock_owner_delay += BASE_DELAY; - if ((global_mem->barrier_cnt1 != cnt) || - (global_mem->barrier_cnt2 != cnt) || - (global_mem->slow_thread_num - != slow_thread_num)) - barrier_errs++; - } - - if (no_barrier_test == 0) - odp_barrier_wait(&global_mem->test_barriers[cnt]); - - global_mem->barrier_cnt1 = cnt + 1; - odp_sync_stores(); - - if (i_am_slow_thread) { - global_mem->slow_thread_num = next_slow_thread; - global_mem->barrier_cnt2 = cnt + 1; - odp_sync_stores(); - } else { - while (global_mem->barrier_cnt2 != (cnt + 1)) - thread_delay(per_thread_mem, BASE_DELAY); - } - } - - if ((global_mem->g_verbose) && (barrier_errs != 0)) - printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32 - " barrier_errs in %" PRIu32 " iterations\n", thread_num, - per_thread_mem->thread_id, - per_thread_mem->thread_core, barrier_errs, iterations); - - return barrier_errs; -} - -static void *no_barrier_functional_test(void *arg UNUSED) -{ - per_thread_mem_t *per_thread_mem; - uint32_t barrier_errs; - - per_thread_mem = thread_init(); - barrier_errs = barrier_test(per_thread_mem, 1); - - /* - * Note that the following CU_ASSERT MAY appear incorrect, but for the - * no_barrier test it should see barrier_errs or else there is something - * wrong with the test methodology or the ODP thread implementation. - * So this test PASSES only if it sees barrier_errs or a single - * worker was used. 
- */ - CU_ASSERT(barrier_errs != 0 || global_mem->g_num_threads == 1); - thread_finalize(per_thread_mem); - - return NULL; -} - -static void *barrier_functional_test(void *arg UNUSED) -{ - per_thread_mem_t *per_thread_mem; - uint32_t barrier_errs; - - per_thread_mem = thread_init(); - barrier_errs = barrier_test(per_thread_mem, 0); - - CU_ASSERT(barrier_errs == 0); - thread_finalize(per_thread_mem); - - return NULL; -} - -static void spinlock_api_test(odp_spinlock_t *spinlock) -{ - odp_spinlock_init(spinlock); - CU_ASSERT(odp_spinlock_is_locked(spinlock) == 0); - - odp_spinlock_lock(spinlock); - CU_ASSERT(odp_spinlock_is_locked(spinlock) == 1); - - odp_spinlock_unlock(spinlock); - CU_ASSERT(odp_spinlock_is_locked(spinlock) == 0); - - CU_ASSERT(odp_spinlock_trylock(spinlock) == 1); - - CU_ASSERT(odp_spinlock_is_locked(spinlock) == 1); - - odp_spinlock_unlock(spinlock); - CU_ASSERT(odp_spinlock_is_locked(spinlock) == 0); -} - -static void *spinlock_api_tests(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - odp_spinlock_t local_spin_lock; - - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - - odp_barrier_wait(&global_mem->global_barrier); - - spinlock_api_test(&local_spin_lock); - spinlock_api_test(&per_thread_mem->per_thread_spinlock); - - thread_finalize(per_thread_mem); - - return NULL; -} - -static void spinlock_recursive_api_test(odp_spinlock_recursive_t *spinlock) -{ - odp_spinlock_recursive_init(spinlock); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 0); - - odp_spinlock_recursive_lock(spinlock); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1); - - odp_spinlock_recursive_lock(spinlock); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1); - - odp_spinlock_recursive_unlock(spinlock); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1); - - odp_spinlock_recursive_unlock(spinlock); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 0); - - 
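`spinlock_api_test` above checks the invariants init -> unlocked, lock -> locked, unlock -> unlocked, and trylock succeeding on a free lock (the ticketlock variant additionally checks that trylock *fails* on a held lock). A compilable stand-in built on a C11 compare-and-swap, with the same observable semantics (the `demo_*` names are mine, not ODP's):

```c
#include <stdatomic.h>

/* Spinlock sketch: 0 = free, 1 = held. */
typedef struct {
	_Atomic int locked;
} demo_spinlock_t;

static void demo_spinlock_init(demo_spinlock_t *l)
{
	atomic_store(&l->locked, 0);
}

/* Returns 1 if the lock was acquired, 0 if it was already held. */
static int demo_spinlock_trylock(demo_spinlock_t *l)
{
	int expected = 0;

	return atomic_compare_exchange_strong(&l->locked, &expected, 1);
}

static void demo_spinlock_lock(demo_spinlock_t *l)
{
	while (!demo_spinlock_trylock(l))
		; /* spin until the holder releases */
}

static void demo_spinlock_unlock(demo_spinlock_t *l)
{
	atomic_store(&l->locked, 0);
}

static int demo_spinlock_is_locked(demo_spinlock_t *l)
{
	return atomic_load(&l->locked);
}
```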
CU_ASSERT(odp_spinlock_recursive_trylock(spinlock) == 1); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1); - - CU_ASSERT(odp_spinlock_recursive_trylock(spinlock) == 1); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1); - - odp_spinlock_recursive_unlock(spinlock); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1); - - odp_spinlock_recursive_unlock(spinlock); - CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 0); -} - -static void *spinlock_recursive_api_tests(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - odp_spinlock_recursive_t local_recursive_spin_lock; - - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - - odp_barrier_wait(&global_mem->global_barrier); - - spinlock_recursive_api_test(&local_recursive_spin_lock); - spinlock_recursive_api_test( - &per_thread_mem->per_thread_recursive_spinlock); - - thread_finalize(per_thread_mem); - - return NULL; -} - -static void ticketlock_api_test(odp_ticketlock_t *ticketlock) -{ - odp_ticketlock_init(ticketlock); - CU_ASSERT(odp_ticketlock_is_locked(ticketlock) == 0); - - odp_ticketlock_lock(ticketlock); - CU_ASSERT(odp_ticketlock_is_locked(ticketlock) == 1); - - odp_ticketlock_unlock(ticketlock); - CU_ASSERT(odp_ticketlock_is_locked(ticketlock) == 0); - - CU_ASSERT(odp_ticketlock_trylock(ticketlock) == 1); - CU_ASSERT(odp_ticketlock_trylock(ticketlock) == 0); - CU_ASSERT(odp_ticketlock_is_locked(ticketlock) == 1); - - odp_ticketlock_unlock(ticketlock); - CU_ASSERT(odp_ticketlock_is_locked(ticketlock) == 0); -} - -static void *ticketlock_api_tests(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - odp_ticketlock_t local_ticket_lock; - - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - - odp_barrier_wait(&global_mem->global_barrier); - - ticketlock_api_test(&local_ticket_lock); - ticketlock_api_test(&per_thread_mem->per_thread_ticketlock); - 
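The recursive-lock assertions above encode the defining property: the same owner may re-acquire the lock, and the lock only becomes free again after a matching number of unlocks. Minimal owner/depth bookkeeping capturing that property (single-threaded sketch with invented `demo_*` names; a real implementation would also spin and use atomics):

```c
#include <stdint.h>

/* Recursive lock state: who holds it (0 = nobody) and how deep. */
typedef struct {
	uint32_t owner;
	uint32_t depth;
} demo_rlock_t;

/* Returns 1 if acquired (first time or re-entry by the owner). */
static int demo_rlock_trylock(demo_rlock_t *l, uint32_t self)
{
	if (l->owner != 0 && l->owner != self)
		return 0;
	l->owner = self;
	l->depth++;
	return 1;
}

/* Each unlock pops one level; the lock frees only at depth zero. */
static void demo_rlock_unlock(demo_rlock_t *l)
{
	if (l->depth > 0 && --l->depth == 0)
		l->owner = 0;
}

static int demo_rlock_is_locked(const demo_rlock_t *l)
{
	return l->owner != 0;
}
```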
- thread_finalize(per_thread_mem); - - return NULL; -} - -static void rwlock_api_test(odp_rwlock_t *rw_lock) -{ - odp_rwlock_init(rw_lock); - /* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */ - - odp_rwlock_read_lock(rw_lock); - odp_rwlock_read_unlock(rw_lock); - - odp_rwlock_write_lock(rw_lock); - /* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 1); */ - - odp_rwlock_write_unlock(rw_lock); - /* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */ -} - -static void *rwlock_api_tests(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - odp_rwlock_t local_rwlock; - - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - - odp_barrier_wait(&global_mem->global_barrier); - - rwlock_api_test(&local_rwlock); - rwlock_api_test(&per_thread_mem->per_thread_rwlock); - - thread_finalize(per_thread_mem); - - return NULL; -} - -static void rwlock_recursive_api_test(odp_rwlock_recursive_t *rw_lock) -{ - odp_rwlock_recursive_init(rw_lock); - /* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */ - - odp_rwlock_recursive_read_lock(rw_lock); - odp_rwlock_recursive_read_lock(rw_lock); - - odp_rwlock_recursive_read_unlock(rw_lock); - odp_rwlock_recursive_read_unlock(rw_lock); - - odp_rwlock_recursive_write_lock(rw_lock); - odp_rwlock_recursive_write_lock(rw_lock); - /* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 1); */ - - odp_rwlock_recursive_write_unlock(rw_lock); - odp_rwlock_recursive_write_unlock(rw_lock); - /* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */ -} - -static void *rwlock_recursive_api_tests(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - odp_rwlock_recursive_t local_recursive_rwlock; - - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - - odp_barrier_wait(&global_mem->global_barrier); - - rwlock_recursive_api_test(&local_recursive_rwlock); - rwlock_recursive_api_test(&per_thread_mem->per_thread_recursive_rwlock); - - 
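The rwlock tests above rely on the usual reader/writer rule: any number of readers may hold the lock concurrently, while a writer needs exclusive access (which is why `odp_rwlock_is_locked` checks are commented out; there is no single "locked" bit). The accounting alone, as a sketch (invented `demo_*` names; real rwlocks add blocking, atomicity and fairness):

```c
#include <stdint.h>

/* Reader/writer state: a reader count plus a writer flag. */
typedef struct {
	uint32_t readers;
	int writer;
} demo_rwlock_t;

/* Readers are admitted unless a writer holds the lock. */
static int demo_rw_read_trylock(demo_rwlock_t *l)
{
	if (l->writer)
		return 0;
	l->readers++;
	return 1;
}

static void demo_rw_read_unlock(demo_rwlock_t *l)
{
	l->readers--;
}

/* A writer is admitted only when there are no readers or writers. */
static int demo_rw_write_trylock(demo_rwlock_t *l)
{
	if (l->writer || l->readers > 0)
		return 0;
	l->writer = 1;
	return 1;
}

static void demo_rw_write_unlock(demo_rwlock_t *l)
{
	l->writer = 0;
}
```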
thread_finalize(per_thread_mem); - - return NULL; -} - -static void *no_lock_functional_test(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt; - uint32_t sync_failures, current_errs, lock_owner_delay; - - thread_num = odp_cpu_id() + 1; - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - iterations = global_mem->g_iterations; - - odp_barrier_wait(&global_mem->global_barrier); - - sync_failures = 0; - current_errs = 0; - rs_idx = 0; - resync_cnt = iterations / NUM_RESYNC_BARRIERS; - lock_owner_delay = BASE_DELAY; - - for (cnt = 1; cnt <= iterations; cnt++) { - global_mem->global_lock_owner = thread_num; - odp_sync_stores(); - thread_delay(per_thread_mem, lock_owner_delay); - - if (global_mem->global_lock_owner != thread_num) { - current_errs++; - sync_failures++; - } - - global_mem->global_lock_owner = 0; - odp_sync_stores(); - thread_delay(per_thread_mem, MIN_DELAY); - - if (global_mem->global_lock_owner == thread_num) { - current_errs++; - sync_failures++; - } - - if (current_errs == 0) - lock_owner_delay++; - - /* Wait a small amount of time and rerun the test */ - thread_delay(per_thread_mem, BASE_DELAY); - - /* Try to resync all of the threads to increase contention */ - if ((rs_idx < NUM_RESYNC_BARRIERS) && - ((cnt % resync_cnt) == (resync_cnt - 1))) - odp_barrier_wait(&global_mem->barrier_array[rs_idx++]); - } - - if (global_mem->g_verbose) - printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32 - " sync_failures in %" PRIu32 " iterations\n", - thread_num, - per_thread_mem->thread_id, - per_thread_mem->thread_core, - sync_failures, iterations); - - /* Note that the following CU_ASSERT MAY appear incorrect, but for the - * no_lock test it should see sync_failures or else there is something - * wrong with the test methodology or the ODP thread implementation. 
- * So this test PASSES only if it sees sync_failures or a single - * worker was used. - */ - CU_ASSERT(sync_failures != 0 || global_mem->g_num_threads == 1); - - thread_finalize(per_thread_mem); - - return NULL; -} - -static void *spinlock_functional_test(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt; - uint32_t sync_failures, is_locked_errs, current_errs; - uint32_t lock_owner_delay; - - thread_num = odp_cpu_id() + 1; - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - iterations = global_mem->g_iterations; - - odp_barrier_wait(&global_mem->global_barrier); - - sync_failures = 0; - is_locked_errs = 0; - current_errs = 0; - rs_idx = 0; - resync_cnt = iterations / NUM_RESYNC_BARRIERS; - lock_owner_delay = BASE_DELAY; - - for (cnt = 1; cnt <= iterations; cnt++) { - /* Acquire the shared global lock */ - odp_spinlock_lock(&global_mem->global_spinlock); - - /* Make sure we have the lock AND didn't previously own it */ - if (odp_spinlock_is_locked(&global_mem->global_spinlock) != 1) - is_locked_errs++; - - if (global_mem->global_lock_owner != 0) { - current_errs++; - sync_failures++; - } - - /* Now set the global_lock_owner to be us, wait a while, and - * then we see if anyone else has snuck in and changed the - * global_lock_owner to be themselves - */ - global_mem->global_lock_owner = thread_num; - odp_sync_stores(); - thread_delay(per_thread_mem, lock_owner_delay); - if (global_mem->global_lock_owner != thread_num) { - current_errs++; - sync_failures++; - } - - /* Release shared lock, and make sure we no longer have it */ - global_mem->global_lock_owner = 0; - odp_sync_stores(); - odp_spinlock_unlock(&global_mem->global_spinlock); - if (global_mem->global_lock_owner == thread_num) { - current_errs++; - sync_failures++; - } - - if (current_errs == 0) - lock_owner_delay++; - - /* Wait a small amount of time and rerun the test */ - 
thread_delay(per_thread_mem, BASE_DELAY); - - /* Try to resync all of the threads to increase contention */ - if ((rs_idx < NUM_RESYNC_BARRIERS) && - ((cnt % resync_cnt) == (resync_cnt - 1))) - odp_barrier_wait(&global_mem->barrier_array[rs_idx++]); - } - - if ((global_mem->g_verbose) && - ((sync_failures != 0) || (is_locked_errs != 0))) - printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32 - " sync_failures and %" PRIu32 - " is_locked_errs in %" PRIu32 - " iterations\n", thread_num, - per_thread_mem->thread_id, per_thread_mem->thread_core, - sync_failures, is_locked_errs, iterations); - - CU_ASSERT(sync_failures == 0); - CU_ASSERT(is_locked_errs == 0); - - thread_finalize(per_thread_mem); - - return NULL; -} - -static void *spinlock_recursive_functional_test(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt; - uint32_t sync_failures, recursive_errs, is_locked_errs, current_errs; - uint32_t lock_owner_delay; - - thread_num = odp_cpu_id() + 1; - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - iterations = global_mem->g_iterations; - - odp_barrier_wait(&global_mem->global_barrier); - - sync_failures = 0; - recursive_errs = 0; - is_locked_errs = 0; - current_errs = 0; - rs_idx = 0; - resync_cnt = iterations / NUM_RESYNC_BARRIERS; - lock_owner_delay = BASE_DELAY; - - for (cnt = 1; cnt <= iterations; cnt++) { - /* Acquire the shared global lock */ - odp_spinlock_recursive_lock( - &global_mem->global_recursive_spinlock); - - /* Make sure we have the lock AND didn't previously own it */ - if (odp_spinlock_recursive_is_locked( - &global_mem->global_recursive_spinlock) != 1) - is_locked_errs++; - - if (global_mem->global_lock_owner != 0) { - current_errs++; - sync_failures++; - } - - /* Now set the global_lock_owner to be us, wait a while, and - * then we see if anyone else has snuck in and changed the - * global_lock_owner to be 
themselves - */ - global_mem->global_lock_owner = thread_num; - odp_sync_stores(); - thread_delay(per_thread_mem, lock_owner_delay); - if (global_mem->global_lock_owner != thread_num) { - current_errs++; - sync_failures++; - } - - /* Verify that we can acquire the lock recursively */ - odp_spinlock_recursive_lock( - &global_mem->global_recursive_spinlock); - if (global_mem->global_lock_owner != thread_num) { - current_errs++; - recursive_errs++; - } - - /* Release the lock and verify that we still have it*/ - odp_spinlock_recursive_unlock( - &global_mem->global_recursive_spinlock); - thread_delay(per_thread_mem, lock_owner_delay); - if (global_mem->global_lock_owner != thread_num) { - current_errs++; - recursive_errs++; - } - - /* Release shared lock, and make sure we no longer have it */ - global_mem->global_lock_owner = 0; - odp_sync_stores(); - odp_spinlock_recursive_unlock( - &global_mem->global_recursive_spinlock); - if (global_mem->global_lock_owner == thread_num) { - current_errs++; - sync_failures++; - } - - if (current_errs == 0) - lock_owner_delay++; - - /* Wait a small amount of time and rerun the test */ - thread_delay(per_thread_mem, BASE_DELAY); - - /* Try to resync all of the threads to increase contention */ - if ((rs_idx < NUM_RESYNC_BARRIERS) && - ((cnt % resync_cnt) == (resync_cnt - 1))) - odp_barrier_wait(&global_mem->barrier_array[rs_idx++]); - } - - if ((global_mem->g_verbose) && - (sync_failures != 0 || recursive_errs != 0 || is_locked_errs != 0)) - printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32 - " sync_failures and %" PRIu32 - " recursive_errs and %" PRIu32 - " is_locked_errs in %" PRIu32 - " iterations\n", thread_num, - per_thread_mem->thread_id, per_thread_mem->thread_core, - sync_failures, recursive_errs, is_locked_errs, - iterations); - - CU_ASSERT(sync_failures == 0); - CU_ASSERT(recursive_errs == 0); - CU_ASSERT(is_locked_errs == 0); - - thread_finalize(per_thread_mem); - - return NULL; -} - -static void 
*ticketlock_functional_test(void *arg UNUSED) -{ - global_shared_mem_t *global_mem; - per_thread_mem_t *per_thread_mem; - uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt; - uint32_t sync_failures, is_locked_errs, current_errs; - uint32_t lock_owner_delay; - - thread_num = odp_cpu_id() + 1; - per_thread_mem = thread_init(); - global_mem = per_thread_mem->global_mem; - iterations = global_mem->g_iterations; - - /* Wait here until all of the threads have also reached this point */ - odp_barrier_wait(&global_mem->global_barrier); - - sync_failures = 0; - is_locked_errs = 0; - current_errs = 0; - rs_idx = 0; - resync_cnt = iterations / NUM_RESYNC_BARRIERS; - lock_owner_delay = BASE_DELAY; - - for (cnt = 1; cnt <= iterations; cnt++) { - /* Acquire the shared global lock */ - odp_ticketlock_lock(&global_mem->global_ticketlock); - - /* Make sure we have the lock AND didn't previously own it */ - if (odp_ticketlock_is_locked(&global_mem->global_ticketlock) - != 1) - is_locked_errs++; - - if (global_mem->global_lock_owner != 0) { - current_errs++; - sync_failures++; - } - - /* Now set the global_lock_owner to be us, wait a while, and - * then we see if anyone else has snuck in and changed the - * global_lock_owner to be themselves - */ - global_mem->global_lock_owner = thread_num; - odp_sync_stores(); - thread_delay(per_thread_mem, lock_owner_delay); - if (global_mem->global_lock_owner != thread_num) { - current_errs++; - sync_failures++; - } - - /* Release shared lock, and make sure we no longer have it */ - global_mem->global_lock_owner = 0; - odp_sync_stores(); - odp_ticketlock_unlock(&global_mem->global_ticketlock); - if (global_mem->global_lock_owner == thread_num) { - current_errs++; - sync_failures++; - } - - if (current_errs == 0) - lock_owner_delay++; - - /* Wait a small amount of time and then rerun the test */ - thread_delay(per_thread_mem, BASE_DELAY); - - /* Try to resync all of the threads to increase contention */ - if ((rs_idx < NUM_RESYNC_BARRIERS) 
&&
-		    ((cnt % resync_cnt) == (resync_cnt - 1)))
-			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
-	}
-
-	if ((global_mem->g_verbose) &&
-	    ((sync_failures != 0) || (is_locked_errs != 0)))
-		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
-		       " sync_failures and %" PRIu32
-		       " is_locked_errs in %" PRIu32 " iterations\n",
-		       thread_num,
-		       per_thread_mem->thread_id, per_thread_mem->thread_core,
-		       sync_failures, is_locked_errs, iterations);
-
-	CU_ASSERT(sync_failures == 0);
-	CU_ASSERT(is_locked_errs == 0);
-
-	thread_finalize(per_thread_mem);
-
-	return NULL;
-}
-
-static void *rwlock_functional_test(void *arg UNUSED)
-{
-	global_shared_mem_t *global_mem;
-	per_thread_mem_t *per_thread_mem;
-	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
-	uint32_t sync_failures, current_errs, lock_owner_delay;
-
-	thread_num = odp_cpu_id() + 1;
-	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
-	iterations = global_mem->g_iterations;
-
-	/* Wait here until all of the threads have also reached this point */
-	odp_barrier_wait(&global_mem->global_barrier);
-
-	sync_failures = 0;
-	current_errs = 0;
-	rs_idx = 0;
-	resync_cnt = iterations / NUM_RESYNC_BARRIERS;
-	lock_owner_delay = BASE_DELAY;
-
-	for (cnt = 1; cnt <= iterations; cnt++) {
-		/* Verify that we can obtain a read lock */
-		odp_rwlock_read_lock(&global_mem->global_rwlock);
-
-		/* Verify lock is unowned (no writer holds it) */
-		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != 0) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		/* Release the read lock */
-		odp_rwlock_read_unlock(&global_mem->global_rwlock);
-
-		/* Acquire the shared global lock */
-		odp_rwlock_write_lock(&global_mem->global_rwlock);
-
-		/* Make sure we have lock now AND didn't previously own it */
-		if (global_mem->global_lock_owner != 0) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		/* Now set the global_lock_owner to be us, wait a while, and
-		 * then we see if anyone else has snuck in and changed the
-		 * global_lock_owner to be themselves
-		 */
-		global_mem->global_lock_owner = thread_num;
-		odp_sync_stores();
-		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != thread_num) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		/* Release shared lock, and make sure we no longer have it */
-		global_mem->global_lock_owner = 0;
-		odp_sync_stores();
-		odp_rwlock_write_unlock(&global_mem->global_rwlock);
-		if (global_mem->global_lock_owner == thread_num) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		if (current_errs == 0)
-			lock_owner_delay++;
-
-		/* Wait a small amount of time and then rerun the test */
-		thread_delay(per_thread_mem, BASE_DELAY);
-
-		/* Try to resync all of the threads to increase contention */
-		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
-		    ((cnt % resync_cnt) == (resync_cnt - 1)))
-			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
-	}
-
-	if ((global_mem->g_verbose) && (sync_failures != 0))
-		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
-		       " sync_failures in %" PRIu32 " iterations\n", thread_num,
-		       per_thread_mem->thread_id,
-		       per_thread_mem->thread_core,
-		       sync_failures, iterations);
-
-	CU_ASSERT(sync_failures == 0);
-
-	thread_finalize(per_thread_mem);
-
-	return NULL;
-}
-
-static void *rwlock_recursive_functional_test(void *arg UNUSED)
-{
-	global_shared_mem_t *global_mem;
-	per_thread_mem_t *per_thread_mem;
-	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
-	uint32_t sync_failures, recursive_errs, current_errs, lock_owner_delay;
-
-	thread_num = odp_cpu_id() + 1;
-	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
-	iterations = global_mem->g_iterations;
-
-	/* Wait here until all of the threads have also reached this point */
-	odp_barrier_wait(&global_mem->global_barrier);
-
-	sync_failures = 0;
-	recursive_errs = 0;
-	current_errs = 0;
-	rs_idx = 0;
-	resync_cnt = iterations / NUM_RESYNC_BARRIERS;
-	lock_owner_delay = BASE_DELAY;
-
-	for (cnt = 1; cnt <= iterations; cnt++) {
-		/* Verify that we can obtain a read lock */
-		odp_rwlock_recursive_read_lock(
-			&global_mem->global_recursive_rwlock);
-
-		/* Verify lock is unowned (no writer holds it) */
-		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != 0) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		/* Verify we can get read lock recursively */
-		odp_rwlock_recursive_read_lock(
-			&global_mem->global_recursive_rwlock);
-
-		/* Verify lock is unowned (no writer holds it) */
-		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != 0) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		/* Release the read lock */
-		odp_rwlock_recursive_read_unlock(
-			&global_mem->global_recursive_rwlock);
-		odp_rwlock_recursive_read_unlock(
-			&global_mem->global_recursive_rwlock);
-
-		/* Acquire the shared global lock */
-		odp_rwlock_recursive_write_lock(
-			&global_mem->global_recursive_rwlock);
-
-		/* Make sure we have lock now AND didn't previously own it */
-		if (global_mem->global_lock_owner != 0) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		/* Now set the global_lock_owner to be us, wait a while, and
-		 * then we see if anyone else has snuck in and changed the
-		 * global_lock_owner to be themselves
-		 */
-		global_mem->global_lock_owner = thread_num;
-		odp_sync_stores();
-		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != thread_num) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		/* Acquire it again and verify we still own it */
-		odp_rwlock_recursive_write_lock(
-			&global_mem->global_recursive_rwlock);
-		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != thread_num) {
-			current_errs++;
-			recursive_errs++;
-		}
-
-		/* Release the recursive lock and make sure we still own it */
-		odp_rwlock_recursive_write_unlock(
-			&global_mem->global_recursive_rwlock);
-		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != thread_num) {
-			current_errs++;
-			recursive_errs++;
-		}
-
-		/* Release shared lock, and make sure we no longer have it */
-		global_mem->global_lock_owner = 0;
-		odp_sync_stores();
-		odp_rwlock_recursive_write_unlock(
-			&global_mem->global_recursive_rwlock);
-		if (global_mem->global_lock_owner == thread_num) {
-			current_errs++;
-			sync_failures++;
-		}
-
-		if (current_errs == 0)
-			lock_owner_delay++;
-
-		/* Wait a small amount of time and then rerun the test */
-		thread_delay(per_thread_mem, BASE_DELAY);
-
-		/* Try to resync all of the threads to increase contention */
-		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
-		    ((cnt % resync_cnt) == (resync_cnt - 1)))
-			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
-	}
-
-	if ((global_mem->g_verbose) && (sync_failures != 0))
-		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
-		       " sync_failures and %" PRIu32
-		       " recursive_errs in %" PRIu32
-		       " iterations\n", thread_num,
-		       per_thread_mem->thread_id,
-		       per_thread_mem->thread_core,
-		       sync_failures, recursive_errs, iterations);
-
-	CU_ASSERT(sync_failures == 0);
-	CU_ASSERT(recursive_errs == 0);
-
-	thread_finalize(per_thread_mem);
-
-	return NULL;
-}
-
-static void barrier_test_init(void)
-{
-	uint32_t num_threads, idx;
-
-	num_threads = global_mem->g_num_threads;
-
-	for (idx = 0; idx < NUM_TEST_BARRIERS; idx++) {
-		odp_barrier_init(&global_mem->test_barriers[idx], num_threads);
-		custom_barrier_init(&global_mem->custom_barrier1[idx],
-				    num_threads);
-		custom_barrier_init(&global_mem->custom_barrier2[idx],
-				    num_threads);
-	}
-
-	global_mem->slow_thread_num = 1;
-	global_mem->barrier_cnt1 = 1;
-	global_mem->barrier_cnt2 = 1;
-}
-
-static void test_atomic_inc_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_inc_u32(&a32u);
-}
-
-static void test_atomic_inc_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_inc_u64(&a64u);
-}
-
-static void test_atomic_dec_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_dec_u32(&a32u);
-}
-
-static void test_atomic_dec_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_dec_u64(&a64u);
-}
-
-static void test_atomic_fetch_inc_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_inc_u32(&a32u);
-}
-
-static void test_atomic_fetch_inc_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_inc_u64(&a64u);
-}
-
-static void test_atomic_fetch_dec_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_dec_u32(&a32u);
-}
-
-static void test_atomic_fetch_dec_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_dec_u64(&a64u);
-}
-
-static void test_atomic_add_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_add_u32(&a32u, ADD_SUB_CNT);
-}
-
-static void test_atomic_add_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_add_u64(&a64u, ADD_SUB_CNT);
-}
-
-static void test_atomic_sub_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_sub_u32(&a32u, ADD_SUB_CNT);
-}
-
-static void test_atomic_sub_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_sub_u64(&a64u, ADD_SUB_CNT);
-}
-
-static void test_atomic_fetch_add_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_add_u32(&a32u, ADD_SUB_CNT);
-}
-
-static void test_atomic_fetch_add_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_add_u64(&a64u, ADD_SUB_CNT);
-}
-
-static void test_atomic_fetch_sub_32(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_sub_u32(&a32u, ADD_SUB_CNT);
-}
-
-static void test_atomic_fetch_sub_64(void)
-{
-	int i;
-
-	for (i = 0; i < CNT; i++)
-		odp_atomic_fetch_sub_u64(&a64u, ADD_SUB_CNT);
-}
-
-static void test_atomic_inc_dec_32(void)
-{
-	test_atomic_inc_32();
-	test_atomic_dec_32();
-}
-
-static void test_atomic_inc_dec_64(void)
-{
-	test_atomic_inc_64();
-	test_atomic_dec_64();
-}
-
-static void test_atomic_fetch_inc_dec_32(void)
-{
-	test_atomic_fetch_inc_32();
-	test_atomic_fetch_dec_32();
-}
-
-static void test_atomic_fetch_inc_dec_64(void)
-{
-	test_atomic_fetch_inc_64();
-	test_atomic_fetch_dec_64();
-}
-
-static void test_atomic_add_sub_32(void)
-{
-	test_atomic_add_32();
-	test_atomic_sub_32();
-}
-
-static void test_atomic_add_sub_64(void)
-{
-	test_atomic_add_64();
-	test_atomic_sub_64();
-}
-
-static void test_atomic_fetch_add_sub_32(void)
-{
-	test_atomic_fetch_add_32();
-	test_atomic_fetch_sub_32();
-}
-
-static void test_atomic_fetch_add_sub_64(void)
-{
-	test_atomic_fetch_add_64();
-	test_atomic_fetch_sub_64();
-}
-
-static void test_atomic_init(void)
-{
-	odp_atomic_init_u32(&a32u, 0);
-	odp_atomic_init_u64(&a64u, 0);
-}
-
-static void test_atomic_store(void)
-{
-	odp_atomic_store_u32(&a32u, U32_INIT_VAL);
-	odp_atomic_store_u64(&a64u, U64_INIT_VAL);
-}
-
-static void test_atomic_validate(void)
-{
-	CU_ASSERT(U32_INIT_VAL == odp_atomic_load_u32(&a32u));
-	CU_ASSERT(U64_INIT_VAL == odp_atomic_load_u64(&a64u));
-}
-
-/* Barrier tests */
-void synchronizers_test_no_barrier_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	barrier_test_init();
-	odp_cunit_thread_create(no_barrier_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-void synchronizers_test_barrier_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	barrier_test_init();
-	odp_cunit_thread_create(barrier_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-odp_testinfo_t synchronizers_suite_barrier[] = {
-	ODP_TEST_INFO(synchronizers_test_no_barrier_functional),
-	ODP_TEST_INFO(synchronizers_test_barrier_functional),
-	ODP_TEST_INFO_NULL
-};
-
-/* Thread-unsafe tests */
-void synchronizers_test_no_lock_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_cunit_thread_create(no_lock_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-odp_testinfo_t synchronizers_suite_no_locking[] = {
-	ODP_TEST_INFO(synchronizers_test_no_lock_functional),
-	ODP_TEST_INFO_NULL
-};
-
-/* Spin lock tests */
-void synchronizers_test_spinlock_api(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_cunit_thread_create(spinlock_api_tests, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-void synchronizers_test_spinlock_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_spinlock_init(&global_mem->global_spinlock);
-	odp_cunit_thread_create(spinlock_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-void synchronizers_test_spinlock_recursive_api(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_cunit_thread_create(spinlock_recursive_api_tests, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-void synchronizers_test_spinlock_recursive_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_spinlock_recursive_init(&global_mem->global_recursive_spinlock);
-	odp_cunit_thread_create(spinlock_recursive_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-odp_testinfo_t synchronizers_suite_spinlock[] = {
-	ODP_TEST_INFO(synchronizers_test_spinlock_api),
-	ODP_TEST_INFO(synchronizers_test_spinlock_functional),
-	ODP_TEST_INFO_NULL
-};
-
-odp_testinfo_t synchronizers_suite_spinlock_recursive[] = {
-	ODP_TEST_INFO(synchronizers_test_spinlock_recursive_api),
-	ODP_TEST_INFO(synchronizers_test_spinlock_recursive_functional),
-	ODP_TEST_INFO_NULL
-};
-
-/* Ticket lock tests */
-void synchronizers_test_ticketlock_api(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_cunit_thread_create(ticketlock_api_tests, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-void synchronizers_test_ticketlock_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_ticketlock_init(&global_mem->global_ticketlock);
-
-	odp_cunit_thread_create(ticketlock_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-odp_testinfo_t synchronizers_suite_ticketlock[] = {
-	ODP_TEST_INFO(synchronizers_test_ticketlock_api),
-	ODP_TEST_INFO(synchronizers_test_ticketlock_functional),
-	ODP_TEST_INFO_NULL
-};
-
-/* RW lock tests */
-void synchronizers_test_rwlock_api(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_cunit_thread_create(rwlock_api_tests, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-void synchronizers_test_rwlock_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_rwlock_init(&global_mem->global_rwlock);
-	odp_cunit_thread_create(rwlock_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-odp_testinfo_t synchronizers_suite_rwlock[] = {
-	ODP_TEST_INFO(synchronizers_test_rwlock_api),
-	ODP_TEST_INFO(synchronizers_test_rwlock_functional),
-	ODP_TEST_INFO_NULL
-};
-
-void synchronizers_test_rwlock_recursive_api(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_cunit_thread_create(rwlock_recursive_api_tests, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-void synchronizers_test_rwlock_recursive_functional(void)
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	odp_rwlock_recursive_init(&global_mem->global_recursive_rwlock);
-	odp_cunit_thread_create(rwlock_recursive_functional_test, &arg);
-	odp_cunit_thread_exit(&arg);
-}
-
-odp_testinfo_t synchronizers_suite_rwlock_recursive[] = {
-	ODP_TEST_INFO(synchronizers_test_rwlock_recursive_api),
-	ODP_TEST_INFO(synchronizers_test_rwlock_recursive_functional),
-	ODP_TEST_INFO_NULL
-};
-
-int synchronizers_suite_init(void)
-{
-	uint32_t num_threads, idx;
-
-	num_threads = global_mem->g_num_threads;
-	odp_barrier_init(&global_mem->global_barrier, num_threads);
-	for (idx = 0; idx < NUM_RESYNC_BARRIERS; idx++)
-		odp_barrier_init(&global_mem->barrier_array[idx], num_threads);
-
-	return 0;
-}
-
-int synchronizers_init(void)
-{
-	uint32_t workers_count, max_threads;
-	int ret = 0;
-	odp_cpumask_t mask;
-
-	if (0 != odp_init_global(NULL, NULL)) {
-		fprintf(stderr, "error: odp_init_global() failed.\n");
-		return -1;
-	}
-	if (0 != odp_init_local(ODP_THREAD_CONTROL)) {
-		fprintf(stderr, "error: odp_init_local() failed.\n");
-		return -1;
-	}
-
-	global_shm = odp_shm_reserve(GLOBAL_SHM_NAME,
-				     sizeof(global_shared_mem_t), 64,
-				     ODP_SHM_SW_ONLY);
-	if (ODP_SHM_INVALID == global_shm) {
-		fprintf(stderr, "Unable reserve memory for global_shm\n");
-		return -1;
-	}
-
-	global_mem = odp_shm_addr(global_shm);
-	memset(global_mem, 0, sizeof(global_shared_mem_t));
-
-	global_mem->g_num_threads = MAX_WORKERS;
-	global_mem->g_iterations = MAX_ITERATIONS;
-	global_mem->g_verbose = VERBOSE;
-
-	workers_count = odp_cpumask_default_worker(&mask, 0);
-
-	max_threads = (workers_count >= MAX_WORKERS) ?
-			MAX_WORKERS : workers_count;
-
-	if (max_threads < global_mem->g_num_threads) {
-		printf("Requested num of threads is too large\n");
-		printf("reducing from %" PRIu32 " to %" PRIu32 "\n",
-		       global_mem->g_num_threads,
-		       max_threads);
-		global_mem->g_num_threads = max_threads;
-	}
-
-	printf("Num of threads used = %" PRIu32 "\n",
-	       global_mem->g_num_threads);
-
-	return ret;
-}
-
-/* Atomic tests */
-static void *test_atomic_inc_dec_thread(void *arg UNUSED)
-{
-	per_thread_mem_t *per_thread_mem;
-
-	per_thread_mem = thread_init();
-	test_atomic_inc_dec_32();
-	test_atomic_inc_dec_64();
-
-	thread_finalize(per_thread_mem);
-
-	return NULL;
-}
-
-static void *test_atomic_add_sub_thread(void *arg UNUSED)
-{
-	per_thread_mem_t *per_thread_mem;
-
-	per_thread_mem = thread_init();
-	test_atomic_add_sub_32();
-	test_atomic_add_sub_64();
-
-	thread_finalize(per_thread_mem);
-
-	return NULL;
-}
-
-static void *test_atomic_fetch_inc_dec_thread(void *arg UNUSED)
-{
-	per_thread_mem_t *per_thread_mem;
-
-	per_thread_mem = thread_init();
-	test_atomic_fetch_inc_dec_32();
-	test_atomic_fetch_inc_dec_64();
-
-	thread_finalize(per_thread_mem);
-
-	return NULL;
-}
-
-static void *test_atomic_fetch_add_sub_thread(void *arg UNUSED)
-{
-	per_thread_mem_t *per_thread_mem;
-
-	per_thread_mem = thread_init();
-	test_atomic_fetch_add_sub_32();
-	test_atomic_fetch_add_sub_64();
-
-	thread_finalize(per_thread_mem);
-
-	return NULL;
-}
-
-static void test_atomic_functional(void *func_ptr(void *))
-{
-	pthrd_arg arg;
-
-	arg.numthrds = global_mem->g_num_threads;
-	test_atomic_init();
-	test_atomic_store();
-	odp_cunit_thread_create(func_ptr, &arg);
-	odp_cunit_thread_exit(&arg);
-	test_atomic_validate();
-}
-
-void synchronizers_test_atomic_inc_dec(void)
-{
-	test_atomic_functional(test_atomic_inc_dec_thread);
-}
-
-void synchronizers_test_atomic_add_sub(void)
-{
-	test_atomic_functional(test_atomic_add_sub_thread);
-}
-
-void synchronizers_test_atomic_fetch_inc_dec(void)
-{
-	test_atomic_functional(test_atomic_fetch_inc_dec_thread);
-}
-
-void synchronizers_test_atomic_fetch_add_sub(void)
-{
-	test_atomic_functional(test_atomic_fetch_add_sub_thread);
-}
-
-odp_testinfo_t synchronizers_suite_atomic[] = {
-	ODP_TEST_INFO(synchronizers_test_atomic_inc_dec),
-	ODP_TEST_INFO(synchronizers_test_atomic_add_sub),
-	ODP_TEST_INFO(synchronizers_test_atomic_fetch_inc_dec),
-	ODP_TEST_INFO(synchronizers_test_atomic_fetch_add_sub),
-	ODP_TEST_INFO_NULL,
-};
-
-odp_suiteinfo_t synchronizers_suites[] = {
-	{"barrier", NULL, NULL,
-		synchronizers_suite_barrier},
-	{"nolocking", synchronizers_suite_init, NULL,
-		synchronizers_suite_no_locking},
-	{"spinlock", synchronizers_suite_init, NULL,
-		synchronizers_suite_spinlock},
-	{"spinlock_recursive", synchronizers_suite_init, NULL,
-		synchronizers_suite_spinlock_recursive},
-	{"ticketlock", synchronizers_suite_init, NULL,
-		synchronizers_suite_ticketlock},
-	{"rwlock", synchronizers_suite_init, NULL,
-		synchronizers_suite_rwlock},
-	{"rwlock_recursive", synchronizers_suite_init, NULL,
-		synchronizers_suite_rwlock_recursive},
-	{"atomic", NULL, NULL,
-		synchronizers_suite_atomic},
-	ODP_SUITE_INFO_NULL
-};
-
-int synchronizers_main(void)
-{
-	int ret;
-
-	odp_cunit_register_global_init(synchronizers_init);
-
-	ret = odp_cunit_register(synchronizers_suites);
-
-	if (ret == 0)
-		ret = odp_cunit_run();
-
-	return ret;
-}
diff --git a/test/validation/synchronizers/synchronizers.h b/test/validation/synchronizers/synchronizers.h
deleted file mode 100644
index 9725996..0000000
--- a/test/validation/synchronizers/synchronizers.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/* Copyright (c) 2015, Linaro Limited
- * All rights reserved.
- *
- * SPDX-License-Identifier: BSD-3-Clause
- */
-
-#ifndef _ODP_TEST_SYNCHRONIZERS_H_
-#define _ODP_TEST_SYNCHRONIZERS_H_
-
-#include
-
-/* test functions: */
-void synchronizers_test_no_barrier_functional(void);
-void synchronizers_test_barrier_functional(void);
-void synchronizers_test_no_lock_functional(void);
-void synchronizers_test_spinlock_api(void);
-void synchronizers_test_spinlock_functional(void);
-void synchronizers_test_spinlock_recursive_api(void);
-void synchronizers_test_spinlock_recursive_functional(void);
-void synchronizers_test_ticketlock_api(void);
-void synchronizers_test_ticketlock_functional(void);
-void synchronizers_test_rwlock_api(void);
-void synchronizers_test_rwlock_functional(void);
-void synchronizers_test_rwlock_recursive_api(void);
-void synchronizers_test_rwlock_recursive_functional(void);
-void synchronizers_test_atomic_inc_dec(void);
-void synchronizers_test_atomic_add_sub(void);
-void synchronizers_test_atomic_fetch_inc_dec(void);
-void synchronizers_test_atomic_fetch_add_sub(void);
-
-/* test arrays: */
-extern odp_testinfo_t synchronizers_suite_barrier[];
-extern odp_testinfo_t synchronizers_suite_no_locking[];
-extern odp_testinfo_t synchronizers_suite_spinlock[];
-extern odp_testinfo_t synchronizers_suite_spinlock_recursive[];
-extern odp_testinfo_t synchronizers_suite_ticketlock[];
-extern odp_testinfo_t synchronizers_suite_rwlock[];
-extern odp_testinfo_t synchronizers_suite_rwlock_recursive[];
-extern odp_testinfo_t synchronizers_suite_atomic[];
-
-/* test array init/term functions: */
-int synchronizers_suite_init(void);
-
-/* test registry: */
-extern odp_suiteinfo_t synchronizers_suites[];
-
-/* executable init/term functions: */
-int synchronizers_init(void);
-
-/* main test program: */
-int synchronizers_main(void);
-
-#endif
diff --git a/test/validation/synchronizers/synchronizers_main.c b/test/validation/synchronizers/synchronizers_main.c
deleted file mode 100644
index 659d315..0000000
--- a/test/validation/synchronizers/synchronizers_main.c
+++ /dev/null
@@ -1,12 +0,0 @@
-/* Copyright (c) 2015, Linaro Limited
- * All rights reserved.
- *
- * SPDX-License-Identifier: BSD-3-Clause
- */
-
-#include "synchronizers.h"
-
-int main(void)
-{
-	return synchronizers_main();
-}