From patchwork Tue Jun 30 16:14:54 2015
X-Patchwork-Submitter: Christophe Milard <christophe.milard@linaro.org>
X-Patchwork-Id: 50487
From: Christophe Milard <christophe.milard@linaro.org>
To: anders.roxell@linaro.org, mike.holmes@linaro.org, stuart.haslam@linaro.org,
 maxim.uvarov@linaro.org
Cc: lng-odp@lists.linaro.org
Date: Tue, 30 Jun 2015 18:14:54 +0200
Message-Id: <1435680896-11924-4-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1435680896-11924-1-git-send-email-christophe.milard@linaro.org>
References: <1435680896-11924-1-git-send-email-christophe.milard@linaro.org>
Subject: [lng-odp] [PATCH 3/5] validation: cosmetic changes in odp_scheduler.c

Changes to calm down checkpatch, which is called via check-odp when the
file is moved in the next patch. A few things remain, but it is not
clear that fixing them would make the code more readable.
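For reviewers unfamiliar with the substitution below: the main
non-whitespace change swaps CUnit's generic fatal assertion for its
dedicated pointer variant. A minimal sketch of the two styles follows
(illustrative only, not part of the diff; the function name is made up
for the example):

    #include <CUnit/Basic.h>

    /* Illustrative only: the assertion styles before and after this patch. */
    static void null_check_styles(void)
    {
            int x = 0;
            void *ptr = &x;

            /* Before: generic assertion with an explicit NULL comparison,
             * the kind of construct checkpatch flags ("Comparison to NULL
             * could be written ..."). */
            CU_ASSERT_FATAL(ptr != NULL);

            /* After: CUnit's dedicated pointer assertion, same semantics,
             * no explicit NULL comparison. */
            CU_ASSERT_PTR_NOT_NULL_FATAL(ptr);
    }
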
Signed-off-by: Christophe Milard <christophe.milard@linaro.org>
---
 test/validation/odp_scheduler.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/test/validation/odp_scheduler.c b/test/validation/odp_scheduler.c
index e0aa8e6..2f2e627 100644
--- a/test/validation/odp_scheduler.c
+++ b/test/validation/odp_scheduler.c
@@ -8,7 +8,7 @@
 #include "odp_cunit_common.h"
 
 #define MAX_WORKERS_THREADS 32
-#define MSG_POOL_SIZE (4*1024*1024)
+#define MSG_POOL_SIZE (4 * 1024 * 1024)
 #define QUEUES_PER_PRIO 16
 #define BUF_SIZE 64
 #define TEST_NUM_BUFS 100
@@ -312,7 +312,7 @@ static void schedule_common(odp_schedule_sync_t sync, int num_queues,
 	shm = odp_shm_lookup(GLOBALS_SHM_NAME);
 	CU_ASSERT_FATAL(shm != ODP_SHM_INVALID);
 	globals = odp_shm_addr(shm);
-	CU_ASSERT_FATAL(globals != NULL);
+	CU_ASSERT_PTR_NOT_NULL_FATAL(globals);
 
 	args.globals = globals;
 	args.sync = sync;
@@ -339,12 +339,12 @@ static void parallel_execute(odp_schedule_sync_t sync, int num_queues,
 	shm = odp_shm_lookup(GLOBALS_SHM_NAME);
 	CU_ASSERT_FATAL(shm != ODP_SHM_INVALID);
 	globals = odp_shm_addr(shm);
-	CU_ASSERT_FATAL(globals != NULL);
+	CU_ASSERT_PTR_NOT_NULL_FATAL(globals);
 
 	shm = odp_shm_lookup(SHM_THR_ARGS_NAME);
 	CU_ASSERT_FATAL(shm != ODP_SHM_INVALID);
 	args = odp_shm_addr(shm);
-	CU_ASSERT_FATAL(args != NULL);
+	CU_ASSERT_PTR_NOT_NULL_FATAL(args);
 
 	args->globals = globals;
 	args->sync = sync;
@@ -410,6 +410,7 @@ static void scheduler_test_mq_1t_o(void)
 static void scheduler_test_mq_1t_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	schedule_common(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_ONE);
 }
 
@@ -417,6 +418,7 @@ static void scheduler_test_mq_1t_prio_n(void)
 static void scheduler_test_mq_1t_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_ONE);
 }
 
@@ -424,6 +426,7 @@ static void scheduler_test_mq_1t_prio_a(void)
 static void scheduler_test_mq_1t_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	schedule_common(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_ONE);
 }
 
@@ -431,6 +434,7 @@ static void scheduler_test_mq_1t_prio_o(void)
 static void scheduler_test_mq_mt_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	parallel_execute(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_ONE,
 			 DISABLE_EXCL_ATOMIC);
 }
@@ -439,6 +443,7 @@ static void scheduler_test_mq_mt_prio_n(void)
 static void scheduler_test_mq_mt_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	parallel_execute(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_ONE,
 			 DISABLE_EXCL_ATOMIC);
 }
@@ -447,6 +452,7 @@ static void scheduler_test_mq_mt_prio_a(void)
 static void scheduler_test_mq_mt_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	parallel_execute(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_ONE,
 			 DISABLE_EXCL_ATOMIC);
 }
@@ -500,6 +506,7 @@ static void scheduler_test_multi_mq_1t_o(void)
 static void scheduler_test_multi_mq_1t_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	schedule_common(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_MULTI);
 }
 
@@ -507,6 +514,7 @@ static void scheduler_test_multi_mq_1t_prio_n(void)
 static void scheduler_test_multi_mq_1t_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_MULTI);
 }
 
@@ -514,6 +522,7 @@ static void scheduler_test_multi_mq_1t_prio_a(void)
 static void scheduler_test_multi_mq_1t_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	schedule_common(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_MULTI);
 }
 
@@ -521,6 +530,7 @@ static void scheduler_test_multi_mq_1t_prio_o(void)
 static void scheduler_test_multi_mq_mt_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	parallel_execute(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_MULTI, 0);
 }
 
@@ -528,6 +538,7 @@ static void scheduler_test_multi_mq_mt_prio_n(void)
 static void scheduler_test_multi_mq_mt_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	parallel_execute(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_MULTI, 0);
 }
 
@@ -535,6 +546,7 @@ static void scheduler_test_multi_mq_mt_prio_a(void)
 static void scheduler_test_multi_mq_mt_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
+
 	parallel_execute(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_MULTI, 0);
 }
 
@@ -560,7 +572,6 @@ static void scheduler_test_pause_resume(void)
 	pool = odp_pool_lookup(MSG_POOL_NAME);
 	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
 
-
 	for (i = 0; i < NUM_BUFS_PAUSE; i++) {
 		buf = odp_buffer_alloc(pool);
 		CU_ASSERT_FATAL(buf != ODP_BUFFER_INVALID);
@@ -661,7 +672,7 @@ static int scheduler_suite_init(void)
 
 	params.buf.size = BUF_SIZE;
 	params.buf.align = 0;
-	params.buf.num = MSG_POOL_SIZE/BUF_SIZE;
+	params.buf.num = MSG_POOL_SIZE / BUF_SIZE;
 	params.type = ODP_POOL_BUFFER;
 
 	pool = odp_pool_create(MSG_POOL_NAME, ODP_SHM_NULL, &params);
@@ -676,7 +687,7 @@ static int scheduler_suite_init(void)
 
 	globals = odp_shm_addr(shm);
 
-	if (globals == NULL) {
+	if (!globals) {
 		printf("Shared memory reserve failed (globals).\n");
 		return -1;
 	}
@@ -691,7 +702,7 @@ static int scheduler_suite_init(void)
 			      ODP_CACHE_LINE_SIZE, 0);
 
 	args = odp_shm_addr(shm);
-	if (args == NULL) {
+	if (!args) {
 		printf("Shared memory reserve failed (args).\n");
 		return -1;
 	}