From patchwork Wed Sep 2 02:50:16 2015
X-Patchwork-Submitter: Bill Fischofer <bill.fischofer@linaro.org>
X-Patchwork-Id: 52950
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Tue, 1 Sep 2015 21:50:16 -0500
Message-Id: <1441162216-15540-2-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1441162216-15540-1-git-send-email-bill.fischofer@linaro.org>
References: <1441162216-15540-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [PATCH 2/2] validation: schedule: add missing coverage for new APIs
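
Extend the scheduler validation suite to exercise the new scheduler
APIs: odp_schedule_prefetch(), odp_schedule_group_destroy(),
odp_schedule_order_lock_init(), odp_schedule_order_lock() and
odp_schedule_order_unlock(). Ordered queues get a per-queue context
holding an ordered lock and sequence counters: fill_queues() stamps
each buffer with a sequence number at enqueue time, and workers
verify under the ordered lock that events arrive in enqueue order.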

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 test/validation/scheduler/scheduler.c | 130 +++++++++++++++++++++++++++++++++-
 1 file changed, 129 insertions(+), 1 deletion(-)

diff --git a/test/validation/scheduler/scheduler.c b/test/validation/scheduler/scheduler.c
index 2ca8b21..5b0293e 100644
--- a/test/validation/scheduler/scheduler.c
+++ b/test/validation/scheduler/scheduler.c
@@ -20,6 +20,7 @@
 
 #define GLOBALS_SHM_NAME	"test_globals"
 #define MSG_POOL_NAME		"msg_pool"
+#define QUEUE_CTX_POOL_NAME	"queue_ctx_pool"
 #define SHM_MSG_POOL_NAME	"shm_msg_pool"
 #define SHM_THR_ARGS_NAME	"shm_thr_args"
@@ -59,7 +60,19 @@
 	int enable_excl_atomic;
 } thread_args_t;
 
+typedef struct {
+	uint64_t sequence;
+} buf_contents;
+
+typedef struct {
+	odp_buffer_t ctx_handle;
+	uint64_t sequence;
+	uint64_t lock_sequence;
+	odp_schedule_order_lock_t order_lock;
+} queue_context;
+
 odp_pool_t pool;
+odp_pool_t queue_ctx_pool;
 
 static int exit_schedule_loop(void)
 {
@@ -327,6 +340,12 @@
 	rc = odp_schedule_group_join(mygrp1, &mymask);
 	CU_ASSERT_FATAL(rc == 0);
 
+	/* Tell scheduler we're about to request an event.
+	 * Not needed, but a convenient place to test this API.
+	 */
+	odp_schedule_prefetch(1);
+
+	/* Now get the event from Queue 1 */
 	ev = odp_schedule(&from, ODP_SCHED_WAIT);
 	CU_ASSERT_FATAL(ev != ODP_EVENT_INVALID);
 	CU_ASSERT_FATAL(from == queue_grp1);
@@ -350,6 +369,8 @@
 		CU_ASSERT_FATAL(odp_queue_destroy(queue_grp2) == 0);
 	}
 
+	CU_ASSERT_FATAL(odp_schedule_group_destroy(mygrp1) == 0);
+	CU_ASSERT_FATAL(odp_schedule_group_destroy(mygrp2) == 0);
 	CU_ASSERT_FATAL(odp_pool_destroy(p) == 0);
 }
 
@@ -358,6 +379,8 @@ static void *schedule_common_(void *arg)
 	thread_args_t *args = (thread_args_t *)arg;
 	odp_schedule_sync_t sync;
 	test_globals_t *globals;
+	queue_context *qctx;
+	buf_contents *bctx;
 
 	globals = args->globals;
 	sync = args->sync;
@@ -389,6 +412,17 @@
 			if (num == 0)
 				continue;
 
+			if (sync == ODP_SCHED_SYNC_ORDERED) {
+				qctx = odp_queue_context(from);
+				bctx = odp_buffer_addr(
+					odp_buffer_from_event(events[0]));
+				odp_schedule_order_lock(&qctx->order_lock);
+				CU_ASSERT(bctx->sequence ==
+					  qctx->lock_sequence);
+				qctx->lock_sequence += num;
+				odp_schedule_order_unlock(&qctx->order_lock);
+			}
+
 			for (j = 0; j < num; j++)
 				odp_event_free(events[j]);
 		} else {
@@ -397,6 +431,15 @@
 			if (buf == ODP_BUFFER_INVALID)
 				continue;
 			num = 1;
+			if (sync == ODP_SCHED_SYNC_ORDERED) {
+				qctx = odp_queue_context(from);
+				bctx = odp_buffer_addr(buf);
+				odp_schedule_order_lock(&qctx->order_lock);
+				CU_ASSERT(bctx->sequence ==
+					  qctx->lock_sequence);
+				qctx->lock_sequence += num;
+				odp_schedule_order_unlock(&qctx->order_lock);
+			}
 			odp_buffer_free(buf);
 		}
 
@@ -484,6 +527,13 @@
 			buf = odp_buffer_alloc(pool);
 			CU_ASSERT_FATAL(buf != ODP_BUFFER_INVALID);
 			ev = odp_buffer_to_event(buf);
+			if (sync == ODP_SCHED_SYNC_ORDERED) {
+				queue_context *qctx =
+					odp_queue_context(queue);
+				buf_contents *bctx =
+					odp_buffer_addr(buf);
+				bctx->sequence = qctx->sequence++;
+			}
 			if (!(CU_ASSERT(odp_queue_enq(queue, ev) == 0)))
 				odp_buffer_free(buf);
 			else
@@ -495,6 +545,32 @@
 	globals->buf_count = buf_count;
 }
 
+static void reset_queues(thread_args_t *args)
+{
+	int i, j, k;
+	int num_prio = args->num_prio;
+	int num_queues = args->num_queues;
+	char name[32];
+
+	for (i = 0; i < num_prio; i++) {
+		for (j = 0; j < num_queues; j++) {
+			odp_queue_t queue;
+
+			snprintf(name, sizeof(name),
+				 "sched_%d_%d_o", i, j);
+			queue = odp_queue_lookup(name);
+			CU_ASSERT_FATAL(queue != ODP_QUEUE_INVALID);
+
+			for (k = 0; k < args->num_bufs; k++) {
+				queue_context *qctx =
+					odp_queue_context(queue);
+				qctx->sequence = 0;
+				qctx->lock_sequence = 0;
+			}
+		}
+	}
+}
+
 static void schedule_common(odp_schedule_sync_t sync, int num_queues,
 			    int num_prio, int enable_schd_multi)
 {
@@ -519,6 +595,8 @@
 	fill_queues(&args);
 
 	schedule_common_(&args);
+	if (sync == ODP_SCHED_SYNC_ORDERED)
+		reset_queues(&args);
 }
 
 static void parallel_execute(odp_schedule_sync_t sync, int num_queues,
@@ -559,6 +637,10 @@
 
 	/* Wait for worker threads to terminate */
 	odp_cunit_thread_exit(&args->cu_thr);
+
+	/* Cleanup ordered queues for next pass */
+	if (sync == ODP_SCHED_SYNC_ORDERED)
+		reset_queues(args);
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_NONE */
@@ -810,9 +892,23 @@ void scheduler_test_pause_resume(void)
 
 static int create_queues(void)
 {
-	int i, j, prios;
+	int i, j, prios, rc;
+	odp_pool_param_t params;
+	odp_buffer_t queue_ctx_buf;
+	queue_context *qctx;
 
 	prios = odp_schedule_num_prio();
+	odp_pool_param_init(&params);
+	params.buf.size = sizeof(queue_context);
+	params.buf.num = prios * QUEUES_PER_PRIO;
+	params.type = ODP_POOL_BUFFER;
+
+	queue_ctx_pool = odp_pool_create(QUEUE_CTX_POOL_NAME, &params);
+
+	if (queue_ctx_pool == ODP_POOL_INVALID) {
+		printf("Pool creation failed (queue ctx).\n");
+		return -1;
+	}
 
 	for (i = 0; i < prios; i++) {
 		odp_queue_param_t p;
@@ -850,6 +946,31 @@
 				printf("Schedule queue create failed.\n");
 				return -1;
 			}
+
+			queue_ctx_buf = odp_buffer_alloc(queue_ctx_pool);
+
+			if (queue_ctx_buf == ODP_BUFFER_INVALID) {
+				printf("Cannot allocate queue ctx buf\n");
+				return -1;
+			}
+
+			qctx = odp_buffer_addr(queue_ctx_buf);
+			qctx->ctx_handle = queue_ctx_buf;
+			qctx->sequence = 0;
+			qctx->lock_sequence = 0;
+			rc = odp_schedule_order_lock_init(&qctx->order_lock, q);
+
+			if (rc != 0) {
+				printf("Ordered lock init failed\n");
+				return -1;
+			}
+
+			rc = odp_queue_context_set(q, qctx);
+
+			if (rc != 0) {
+				printf("Cannot set queue context\n");
+				return -1;
+			}
 		}
 	}
 
@@ -919,11 +1040,15 @@ int scheduler_suite_init(void)
 static int destroy_queue(const char *name)
 {
 	odp_queue_t q;
+	queue_context *qctx;
 
 	q = odp_queue_lookup(name);
 
 	if (q == ODP_QUEUE_INVALID)
 		return -1;
+	qctx = odp_queue_context(q);
+	if (qctx)
+		odp_buffer_free(qctx->ctx_handle);
 
 	return odp_queue_destroy(q);
 }
@@ -952,6 +1077,9 @@
 		}
 	}
 
+	if (odp_pool_destroy(queue_ctx_pool) != 0)
+		return -1;
+
 	return 0;
 }
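
[Not part of the patch: editor's illustration.] For readers new to the
ordered-lock API this patch covers, the pattern under test is roughly the
following minimal worker-loop sketch. It assumes ODP init/termination and
queue setup happen elsewhere (including odp_schedule_order_lock_init() and
odp_queue_context_set() on each ordered queue, as create_queues() does
above); my_ctx_t and worker_loop are illustrative names, not ODP APIs.

#include <odp.h>

/* Per-queue context, attached with odp_queue_context_set() */
typedef struct {
	uint64_t next_seq;                    /* shared state touched in order */
	odp_schedule_order_lock_t order_lock; /* set up with
						 odp_schedule_order_lock_init() */
} my_ctx_t;

static void worker_loop(void)
{
	odp_queue_t from;
	odp_event_t ev;
	my_ctx_t *ctx;

	for (;;) {
		/* Optional hint that a schedule call is imminent */
		odp_schedule_prefetch(1);

		ev = odp_schedule(&from, ODP_SCHED_WAIT);
		if (ev == ODP_EVENT_INVALID)
			continue;

		ctx = odp_queue_context(from);

		/* Between lock and unlock, holders from an ordered queue
		 * are admitted in the order their events were enqueued,
		 * so the shared counter needs no extra synchronization. */
		odp_schedule_order_lock(&ctx->order_lock);
		ctx->next_seq++;
		odp_schedule_order_unlock(&ctx->order_lock);

		odp_event_free(ev);
	}
}

The test goes one step further: fill_queues() stamps each buffer with
qctx->sequence at enqueue time, and schedule_common_() CU_ASSERTs inside
the lock that the stamp equals qctx->lock_sequence, which is what actually
verifies the ordering guarantee.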