From patchwork Wed Jul 1 16:17:06 2015
From: Christophe Milard <christophe.milard@linaro.org>
To: anders.roxell@linaro.org, mike.holmes@linaro.org,
 stuart.haslam@linaro.org, maxim.uvarov@linaro.org
Cc: lng-odp@lists.linaro.org
Date: Wed, 1 Jul 2015 18:17:06 +0200
Message-Id: <1435767428-22409-4-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1435767428-22409-1-git-send-email-christophe.milard@linaro.org>
References: <1435767428-22409-1-git-send-email-christophe.milard@linaro.org>
Subject: [lng-odp] [PATCH 3/5] validation: cosmetic change in odp_synchronizers.c
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 50525
X-Mailer: git-send-email 1.9.1
Cosmetic changes to please checkpatch as much as possible before the file
gets moved (and is re-checked by check-odp).

Signed-off-by: Christophe Milard <christophe.milard@linaro.org>
---
 test/validation/odp_synchronizers.c | 62 ++++++++++++++++++++++++++++++++++----------------------------
 1 file changed, 34 insertions(+), 28 deletions(-)

diff --git a/test/validation/odp_synchronizers.c b/test/validation/odp_synchronizers.c
index 0e8c846..45348d1 100644
--- a/test/validation/odp_synchronizers.c
+++ b/test/validation/odp_synchronizers.c
@@ -123,7 +123,7 @@ static per_thread_mem_t *thread_init(void)
 
 	global_shm = odp_shm_lookup(GLOBAL_SHM_NAME);
 	global_mem = odp_shm_addr(global_shm);
-	CU_ASSERT(global_mem != NULL);
+	CU_ASSERT_PTR_NOT_NULL(global_mem);
 
 	per_thread_mem->global_mem = global_mem;
 
@@ -181,8 +181,8 @@ static uint32_t barrier_test(per_thread_mem_t *per_thread_mem,
 		barrier_cnt2 = global_mem->barrier_cnt2;
 
 		if ((barrier_cnt1 != cnt) || (barrier_cnt2 != cnt)) {
-			printf("thread_num=%"PRIu32" barrier_cnts of %"PRIu32
-			       " %"PRIu32" cnt=%"PRIu32"\n",
+			printf("thread_num=%" PRIu32 " barrier_cnts of %" PRIu32
+			       " %" PRIu32 " cnt=%" PRIu32 "\n",
 			       thread_num, barrier_cnt1, barrier_cnt2, cnt);
 			barrier_errs++;
 		}
@@ -231,10 +231,10 @@
 	}
 
 	if ((global_mem->g_verbose) && (barrier_errs != 0))
-		printf("\nThread %"PRIu32" (id=%d core=%d) had %"PRIu32
-		       " barrier_errs in %"PRIu32" iterations\n", thread_num,
-		       per_thread_mem->thread_id,
-		       per_thread_mem->thread_core, barrier_errs, iterations);
+		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
+		       " barrier_errs in %" PRIu32 " iterations\n", thread_num,
+		       per_thread_mem->thread_id,
+		       per_thread_mem->thread_core, barrier_errs, iterations);
 
 	return barrier_errs;
 }
@@ -435,8 +435,9 @@ static void *no_lock_functional_test(void *arg UNUSED)
 	}
 
 	if (global_mem->g_verbose)
-		printf("\nThread %"PRIu32" (id=%d core=%d) had %"PRIu32" sync_failures"
-		       " in %"PRIu32" iterations\n", thread_num,
+		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
+		       " sync_failures in %" PRIu32 " iterations\n",
+		       thread_num,
 		       per_thread_mem->thread_id, per_thread_mem->thread_core,
 		       sync_failures, iterations);
 
@@ -523,8 +524,10 @@ static void *spinlock_functional_test(void *arg UNUSED)
 
 	if ((global_mem->g_verbose) &&
 	    ((sync_failures != 0) || (is_locked_errs != 0)))
-		printf("\nThread %"PRIu32" (id=%d core=%d) had %"PRIu32" sync_failures"
-		       " and %"PRIu32" is_locked_errs in %"PRIu32" iterations\n", thread_num,
+		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
+		       " sync_failures and %" PRIu32
+		       " is_locked_errs in %" PRIu32
+		       " iterations\n", thread_num,
 		       per_thread_mem->thread_id, per_thread_mem->thread_core,
 		       sync_failures, is_locked_errs, iterations);
 
@@ -608,8 +611,10 @@ static void *ticketlock_functional_test(void *arg UNUSED)
 
 	if ((global_mem->g_verbose) &&
 	    ((sync_failures != 0) || (is_locked_errs != 0)))
-		printf("\nThread %"PRIu32" (id=%d core=%d) had %"PRIu32" sync_failures"
-		       " and %"PRIu32" is_locked_errs in %"PRIu32" iterations\n", thread_num,
+		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
+		       " sync_failures and %" PRIu32
+		       " is_locked_errs in %" PRIu32 " iterations\n",
+		       thread_num,
 		       per_thread_mem->thread_id, per_thread_mem->thread_core,
 		       sync_failures, is_locked_errs, iterations);
 
@@ -686,8 +691,8 @@ static void *rwlock_functional_test(void *arg UNUSED)
 	}
 
 	if ((global_mem->g_verbose) && (sync_failures != 0))
-		printf("\nThread %"PRIu32" (id=%d core=%d) had %"PRIu32" sync_failures"
-		       " in %"PRIu32" iterations\n", thread_num,
+		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
+		       " sync_failures in %" PRIu32 " iterations\n", thread_num,
 		       per_thread_mem->thread_id, per_thread_mem->thread_core,
 		       sync_failures, iterations);
 
@@ -876,7 +881,6 @@ static void test_atomic_add_sub_32(void)
 	test_atomic_sub_32();
 }
 
-
 static void test_atomic_add_sub_64(void)
 {
 	test_atomic_add_64();
@@ -917,8 +921,8 @@ static void test_atomic_validate(void)
 static void synchronizers_test_no_barrier_functional(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	barrier_test_init();
 	odp_cunit_thread_create(no_barrier_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -927,8 +931,8 @@ static void synchronizers_test_no_barrier_functional(void)
 static void synchronizers_test_barrier_functional(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	barrier_test_init();
 	odp_cunit_thread_create(barrier_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -944,8 +948,8 @@ static CU_TestInfo synchronizers_suite_barrier[] = {
 static void synchronizers_test_no_lock_functional(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	odp_cunit_thread_create(no_lock_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -959,8 +963,8 @@ static CU_TestInfo synchronizers_suite_no_locking[] = {
 static void synchronizers_test_spinlock_api(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	odp_cunit_thread_create(spinlock_api_tests, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -968,8 +972,8 @@ static void synchronizers_test_spinlock_api(void)
 static void synchronizers_test_spinlock_functional(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	odp_spinlock_init(&global_mem->global_spinlock);
 	odp_cunit_thread_create(spinlock_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -985,8 +989,8 @@ static CU_TestInfo synchronizers_suite_spinlock[] = {
 static void synchronizers_test_ticketlock_api(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	odp_cunit_thread_create(ticketlock_api_tests, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -994,6 +998,7 @@ static void synchronizers_test_ticketlock_api(void)
 static void synchronizers_test_ticketlock_functional(void)
 {
 	pthrd_arg arg;
+
 	arg.numthrds = global_mem->g_num_threads;
 
 	odp_ticketlock_init(&global_mem->global_ticketlock);
@@ -1011,8 +1016,8 @@ static CU_TestInfo synchronizers_suite_ticketlock[] = {
 static void synchronizers_test_rwlock_api(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	odp_cunit_thread_create(rwlock_api_tests, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -1020,8 +1025,8 @@ static void synchronizers_test_rwlock_api(void)
 static void synchronizers_test_rwlock_functional(void)
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	odp_rwlock_init(&global_mem->global_rwlock);
 	odp_cunit_thread_create(rwlock_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -1033,7 +1038,6 @@ static CU_TestInfo synchronizers_suite_rwlock[] = {
 	CU_TEST_INFO_NULL
 };
 
-
 static int synchronizers_suite_init(void)
 {
 	uint32_t num_threads, idx;
@@ -1081,12 +1085,14 @@ int tests_global_init(void)
 
 	if (max_threads < global_mem->g_num_threads) {
 		printf("Requested num of threads is too large\n");
-		printf("reducing from %"PRIu32" to %"PRIu32"\n", global_mem->g_num_threads,
+		printf("reducing from %" PRIu32 " to %" PRIu32 "\n",
+		       global_mem->g_num_threads,
 		       max_threads);
 		global_mem->g_num_threads = max_threads;
 	}
 
-	printf("Num of threads used = %"PRIu32"\n", global_mem->g_num_threads);
+	printf("Num of threads used = %" PRIu32 "\n",
+	       global_mem->g_num_threads);
 
 	return ret;
 }
@@ -1147,8 +1153,8 @@ static void *test_atomic_fetch_add_sub_thread(void *arg UNUSED)
 static void test_atomic_functional(void *func_ptr(void *))
 {
 	pthrd_arg arg;
-	arg.numthrds = global_mem->g_num_threads;
 
+	arg.numthrds = global_mem->g_num_threads;
 	test_atomic_init();
 	test_atomic_store();
 	odp_cunit_thread_create(func_ptr, &arg);