From patchwork Tue Nov 10 04:20:11 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 56283
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Mon, 9 Nov 2015 20:20:11 -0800
Message-Id: <1447129211-9095-9-git-send-email-bill.fischofer@linaro.org>
In-Reply-To: <1447129211-9095-1-git-send-email-bill.fischofer@linaro.org>
References: <1447129211-9095-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [API-NEXT PATCHv3 8/8] validation: schedule: add chaos test

Add a "chaos" test variant to the scheduler CUnit tests. This test
stresses the scheduler by circulating events among parallel, atomic,
and ordered queues to verify that the scheduler can handle arbitrary
looping paths without deadlock.

Suggested-by: Carl Wallen
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
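Reviewer note (editorial, not part of the commit): the core of the test
is the circulation step in chaos_thread() below. Each worker schedules
the next available event, increments the sequence number carried in the
event payload, and re-enqueues the event to queue
(seqno % CHAOS_NUM_QUEUES), so every event keeps hopping across queues
of all three sync types. The stand-alone sketch that follows models just
that redistribution rule in plain C, with the ODP scheduler and queues
replaced by trivial array-backed FIFOs and a round-robin scan. All names
in it (sim_event, sim_enq, sim_deq, fifo) are illustrative only and do
not appear in the patch.

/* Hypothetical, ODP-free model of the chaos circulation pattern.
 * Queues are plain ring buffers; "scheduling" is a round-robin scan. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_QUEUES 6
#define BUFS_PER_QUEUE 6
#define NUM_EVENTS (NUM_QUEUES * BUFS_PER_QUEUE)
#define ROUNDS 1000

typedef struct {
	uint64_t evno;   /* event identity, as in chaos_buf */
	uint64_t seqno;  /* services so far; drives the queue choice */
} sim_event;

/* Each ring can hold every event at once, so enqueue never fails */
static sim_event fifo[NUM_QUEUES][NUM_EVENTS];
static unsigned head[NUM_QUEUES], tail[NUM_QUEUES];

static void sim_enq(int q, sim_event ev)
{
	fifo[q][tail[q]++ % NUM_EVENTS] = ev;
}

static int sim_deq(int q, sim_event *ev)
{
	if (head[q] == tail[q])
		return 0;
	*ev = fifo[q][head[q]++ % NUM_EVENTS];
	return 1;
}

int main(void)
{
	sim_event ev;
	uint64_t served = 0;
	int i, q = 0;

	/* Seed phase, mirroring the patch: event i starts on queue
	 * i % NUM_QUEUES with seqno 0 */
	for (i = 0; i < NUM_EVENTS; i++) {
		ev.evno = i;
		ev.seqno = 0;
		sim_enq(i % NUM_QUEUES, ev);
	}

	/* Circulation phase: service an event, bump its seqno, and
	 * re-enqueue it to queue (old seqno % NUM_QUEUES) */
	while (served < (uint64_t)ROUNDS * NUM_EVENTS) {
		if (sim_deq(q, &ev)) {
			int next = ev.seqno++ % NUM_QUEUES;

			sim_enq(next, ev);
			served++;
		}
		q = (q + 1) % NUM_QUEUES;
	}

	printf("Simulated %" PRIu64 " event services\n", served);
	return 0;
}

Because the event count is conserved and every service produces exactly
one enqueue, the scan can never run dry; that same invariant is what
lets the real test call odp_schedule() with ODP_SCHED_WAIT and expect to
make progress without deadlock.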
 test/validation/scheduler/scheduler.c | 192 ++++++++++++++++++++++++++++++++++
 test/validation/scheduler/scheduler.h |   1 +
 2 files changed, 193 insertions(+)

diff --git a/test/validation/scheduler/scheduler.c b/test/validation/scheduler/scheduler.c
index 042d7b4..c483fdd 100644
--- a/test/validation/scheduler/scheduler.c
+++ b/test/validation/scheduler/scheduler.c
@@ -39,6 +39,12 @@
 #define MAGIC1 0xdeadbeef
 #define MAGIC2 0xcafef00d
 
+#define CHAOS_NUM_QUEUES 6
+#define CHAOS_NUM_BUFS_PER_QUEUE 6
+#define CHAOS_NUM_ROUNDS 50000
+#define CHAOS_NUM_EVENTS (CHAOS_NUM_QUEUES * CHAOS_NUM_BUFS_PER_QUEUE)
+#define CHAOS_DEBUG (CHAOS_NUM_ROUNDS < 1000)
+
 /* Test global variables */
 typedef struct {
 	int num_workers;
@@ -47,6 +53,11 @@ typedef struct {
 	int buf_count_cpy;
 	odp_ticketlock_t lock;
 	odp_spinlock_t atomic_lock;
+	struct {
+		odp_queue_t handle;
+		char name[ODP_QUEUE_NAME_LEN];
+	} chaos_q[CHAOS_NUM_QUEUES];
+	int chaos_pending_event_count;
 } test_globals_t;
 
 typedef struct {
@@ -74,6 +85,11 @@ typedef struct {
 	uint64_t lock_sequence[ODP_CONFIG_MAX_ORDERED_LOCKS_PER_QUEUE];
 } queue_context;
 
+typedef struct {
+	uint64_t evno;
+	uint64_t seqno;
+} chaos_buf;
+
 odp_pool_t pool;
 odp_pool_t queue_ctx_pool;
 
@@ -381,6 +397,181 @@ void scheduler_test_groups(void)
 	CU_ASSERT_FATAL(odp_pool_destroy(p) == 0);
 }
 
+static void *chaos_thread(void *arg)
+{
+	uint64_t i;
+	int rc;
+	chaos_buf *cbuf;
+	odp_event_t ev;
+	odp_queue_t from;
+	thread_args_t *args = (thread_args_t *)arg;
+	test_globals_t *globals = args->globals;
+	int me = odp_thread_id();
+
+	if (CHAOS_DEBUG)
+		printf("Chaos thread %d starting...\n", me);
+
+	/* Wait for all threads to start */
+	odp_barrier_wait(&globals->barrier);
+
+	/* Run the test */
+	for (i = 0; i < CHAOS_NUM_ROUNDS * CHAOS_NUM_EVENTS; i++) {
+		ev = odp_schedule(&from, ODP_SCHED_WAIT);
+		CU_ASSERT_FATAL(ev != ODP_EVENT_INVALID);
+		cbuf = odp_buffer_addr(odp_buffer_from_event(ev));
+		CU_ASSERT_FATAL(cbuf != NULL);
+		if (CHAOS_DEBUG)
+			printf("Thread %d received event %lu seq %lu "
+			       "from Q %s, sending to Q %s\n",
+			       me, cbuf->evno, cbuf->seqno,
+			       globals->
+			       chaos_q[(uint64_t)odp_queue_context(from)].name,
+			       globals->
+			       chaos_q[cbuf->seqno % CHAOS_NUM_QUEUES].name);
+
+		rc = odp_queue_enq(
+			globals->
+			chaos_q[cbuf->seqno++ % CHAOS_NUM_QUEUES].handle,
+			ev);
+		CU_ASSERT(rc == 0);
+	}
+
+	if (CHAOS_DEBUG)
+		printf("Thread %d completed %d rounds...terminating\n",
+		       odp_thread_id(), CHAOS_NUM_ROUNDS);
+
+	/* Thread complete--drain locally cached scheduled events */
+	odp_schedule_pause();
+
+	while (globals->chaos_pending_event_count > 0) {
+		ev = odp_schedule(&from, ODP_SCHED_NO_WAIT);
+		if (ev == ODP_EVENT_INVALID)
+			break;
+		globals->chaos_pending_event_count--;
+		cbuf = odp_buffer_addr(odp_buffer_from_event(ev));
+		if (CHAOS_DEBUG)
+			printf("Thread %d drained event %lu seq %lu "
+			       "from Q %s\n",
+			       odp_thread_id(), cbuf->evno, cbuf->seqno,
+			       globals->
+			       chaos_q[(uint64_t)odp_queue_context(from)].name);
+		odp_event_free(ev);
+	}
+
+	return NULL;
+}
+
+void scheduler_test_chaos(void)
+{
+	odp_pool_t pool;
+	odp_pool_param_t params;
+	odp_queue_param_t qp;
+	odp_buffer_t buf;
+	chaos_buf *cbuf;
+	odp_event_t ev;
+	test_globals_t *globals;
+	thread_args_t *args;
+	odp_shm_t shm;
+	odp_queue_t from;
+	int i, rc;
+	odp_schedule_sync_t sync[] = {ODP_SCHED_SYNC_NONE,
+				      ODP_SCHED_SYNC_ATOMIC,
+				      ODP_SCHED_SYNC_ORDERED};
+	const unsigned num_sync = (sizeof(sync) / sizeof(sync[0]));
+	const char *const qtypes[] = {"parallel", "atomic", "ordered"};
+
+	/* Set up the scheduling environment */
+	shm = odp_shm_lookup(GLOBALS_SHM_NAME);
+	CU_ASSERT_FATAL(shm != ODP_SHM_INVALID);
+	globals = odp_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL_FATAL(globals);
+
+	shm = odp_shm_lookup(SHM_THR_ARGS_NAME);
+	CU_ASSERT_FATAL(shm != ODP_SHM_INVALID);
+	args = odp_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL_FATAL(args);
+
+	args->globals = globals;
+	args->cu_thr.numthrds = globals->num_workers;
+
+	odp_queue_param_init(&qp);
+	odp_pool_param_init(&params);
+	params.buf.size = sizeof(chaos_buf);
+	params.buf.align = 0;
+	params.buf.num = CHAOS_NUM_EVENTS;
+	params.type = ODP_POOL_BUFFER;
+
+	pool = odp_pool_create("sched_chaos_pool", &params);
+	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
+	qp.sched.prio = ODP_SCHED_PRIO_DEFAULT;
+
+	for (i = 0; i < CHAOS_NUM_QUEUES; i++) {
+		qp.sched.sync = sync[i % num_sync];
+		snprintf(globals->chaos_q[i].name,
+			 sizeof(globals->chaos_q[i].name),
+			 "chaos queue %d - %s", i,
+			 qtypes[i % num_sync]);
+		globals->chaos_q[i].handle =
+			odp_queue_create(globals->chaos_q[i].name,
+					 ODP_QUEUE_TYPE_SCHED,
+					 &qp);
+		CU_ASSERT_FATAL(globals->chaos_q[i].handle !=
+				ODP_QUEUE_INVALID);
+		rc = odp_queue_context_set(globals->chaos_q[i].handle,
+					   (void *)(uint64_t)i);
+		CU_ASSERT_FATAL(rc == 0);
+	}
+
+	/* Now populate the queues with the initial seed elements */
+	for (i = 0; i < CHAOS_NUM_EVENTS; i++) {
+		buf = odp_buffer_alloc(pool);
+		CU_ASSERT_FATAL(buf != ODP_BUFFER_INVALID);
+		cbuf = odp_buffer_addr(buf);
+		cbuf->evno = i;
+		cbuf->seqno = 0;
+		rc = odp_queue_enq(
+			globals->chaos_q[i % CHAOS_NUM_QUEUES].handle,
+			odp_buffer_to_event(buf));
+		CU_ASSERT_FATAL(rc == 0);
+		globals->chaos_pending_event_count++;
+	}
+
+	/* Run the test */
+	odp_cunit_thread_create(chaos_thread, &args->cu_thr);
+	odp_cunit_thread_exit(&args->cu_thr);
+
+	if (CHAOS_DEBUG)
+		printf("Thread %d returning from chaos threads...cleaning up\n",
+		       odp_thread_id());
+
+	/* Cleanup: Drain queues, free events */
+	while (globals->chaos_pending_event_count-- > 0) {
+		ev = odp_schedule(&from, ODP_SCHED_WAIT);
+		CU_ASSERT_FATAL(ev != ODP_EVENT_INVALID);
+		cbuf = odp_buffer_addr(odp_buffer_from_event(ev));
+		if (CHAOS_DEBUG)
+			printf("Draining event %lu seq %lu from Q %s...\n",
+			       cbuf->evno,
+			       cbuf->seqno,
+			       globals->
+			       chaos_q[(uint64_t)odp_queue_context(from)].name);
+		odp_event_free(ev);
+	}
+
+	odp_schedule_release_ordered();
+
+	for (i = 0; i < CHAOS_NUM_QUEUES; i++) {
+		if (CHAOS_DEBUG)
+			printf("Destroying queue %s\n",
+			       globals->chaos_q[i].name);
+		rc = odp_queue_destroy(globals->chaos_q[i].handle);
+		CU_ASSERT(rc == 0);
+	}
+
+	rc = odp_pool_destroy(pool);
+	CU_ASSERT(rc == 0);
+}
+
 static void *schedule_common_(void *arg)
 {
 	thread_args_t *args = (thread_args_t *)arg;
@@ -1265,6 +1456,7 @@ odp_testinfo_t scheduler_suite[] = {
 	ODP_TEST_INFO(scheduler_test_num_prio),
 	ODP_TEST_INFO(scheduler_test_queue_destroy),
 	ODP_TEST_INFO(scheduler_test_groups),
+	ODP_TEST_INFO(scheduler_test_chaos),
 	ODP_TEST_INFO(scheduler_test_1q_1t_n),
 	ODP_TEST_INFO(scheduler_test_1q_1t_a),
 	ODP_TEST_INFO(scheduler_test_1q_1t_o),
diff --git a/test/validation/scheduler/scheduler.h b/test/validation/scheduler/scheduler.h
index c869e41..bba79aa 100644
--- a/test/validation/scheduler/scheduler.h
+++ b/test/validation/scheduler/scheduler.h
@@ -14,6 +14,7 @@ void scheduler_test_wait_time(void);
 void scheduler_test_num_prio(void);
 void scheduler_test_queue_destroy(void);
 void scheduler_test_groups(void);
+void scheduler_test_chaos(void);
 void scheduler_test_1q_1t_n(void);
 void scheduler_test_1q_1t_a(void);
 void scheduler_test_1q_1t_o(void);
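Editorial postscript, outside the patch proper: one idiom worth calling
out is that the patch stores each queue's array index directly in its
queue context via odp_queue_context_set(handle, (void *)(uint64_t)i),
then recovers it with (uint64_t)odp_queue_context(from) to look up the
queue's name without any auxiliary table. The fragment below is a
hypothetical, ODP-free illustration of that pointer-as-index idiom;
ctx_store is merely a stand-in for the per-queue context slots and is
not part of the patch.

/* Hypothetical demo of stashing a small integer in a void * context,
 * as the patch does with odp_queue_context_set()/odp_queue_context() */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SLOTS 6

static void *ctx_store[NUM_SLOTS]; /* stand-in for per-queue contexts */

int main(void)
{
	uint64_t i;

	/* "Set" phase: encode the index itself as the context pointer */
	for (i = 0; i < NUM_SLOTS; i++)
		ctx_store[i] = (void *)i;

	/* "Get" phase: cast the pointer straight back to an index */
	for (i = 0; i < NUM_SLOTS; i++) {
		uint64_t idx = (uint64_t)ctx_store[i];

		assert(idx == i);
		printf("slot %" PRIu64 " context decodes to %" PRIu64 "\n",
		       i, idx);
	}
	return 0;
}

Storing the index in the pointer avoids a per-queue context allocation
and keeps the debug printfs cheap. uintptr_t would be the strictly
portable carrier type, but the values here are tiny (0 through 5), so
the patch's uint64_t casts are safe even where pointers are 32 bits.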