From patchwork Thu Apr 21 13:32:31 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 66382
From: Christophe Milard
To: mike.holmes@linaro.org, bill.fischofer@linaro.org, lng-odp@lists.linaro.org
Date: Thu, 21 Apr 2016 15:32:31 +0200
Message-Id: <1461245551-9954-1-git-send-email-christophe.milard@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [PATCH] validation: lock: tuning the iteration number

fixing: https://bugs.linaro.org/show_bug.cgi?id=2108

The no_lock_functional_test does not really test ODP functionality:
instead, it checks that race conditions can be created between
concurrently running threads (by making these threads write shared
variables without any lock, and later noting that the written value was
changed by some of the concurrent threads).
This test therefore validates the other tests: if it passes, i.e. if we
do get race conditions, and the following tests then suppress these race
conditions by using some synchronization mechanism, that mechanism can
be said to be effective.
If, on the other hand, the no_lock_functional_test "fails", the
following tests are really inconclusive, as the effect of the tested
synchronization mechanism is not proven.

When running with valgrind, no_lock_functional_test failed, probably
because the extra execution time introduced by valgrind itself made it
much less likely that the critical sections of the different threads
would run "at the same time".

The simple solution would be to increase the critical section running
time (by greatly increasing the number of iterations performed). The
solution taken here is instead to tune the critical section running time
(currently to ITER_MPLY_FACTOR=3 times the time needed to observe the
first race condition). This means that the test will take longer to run
with valgrind, but will remain short without it.

Signed-off-by: Christophe Milard
---
 test/validation/lock/lock.c | 71 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 64 insertions(+), 7 deletions(-)

diff --git a/test/validation/lock/lock.c b/test/validation/lock/lock.c
index f1f6d69..515bc77 100644
--- a/test/validation/lock/lock.c
+++ b/test/validation/lock/lock.c
@@ -12,7 +12,10 @@
 #include "lock.h"
 
 #define VERBOSE			0
-#define MAX_ITERATIONS		1000
+
+#define MIN_ITERATIONS		1000
+#define MAX_ITERATIONS		30000
+#define ITER_MPLY_FACTOR	3
 #define SLOW_BARRIER_DELAY	400
 #define BASE_DELAY		6
@@ -325,6 +328,12 @@ static void *rwlock_recursive_api_tests(void *arg UNUSED)
 	return NULL;
 }
 
+/*
+ * Tests that we do have contention between threads when running.
+ * Also adjusts the number of iterations to be done (by the other tests)
+ * so we have a fair chance to see that the tested synchronizer
+ * does avoid the race condition.
+ */
 static void *no_lock_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
@@ -335,17 +344,36 @@ static void *no_lock_functional_test(void *arg UNUSED)
 	thread_num = odp_cpu_id() + 1;
 	per_thread_mem = thread_init();
 	global_mem = per_thread_mem->global_mem;
-	iterations = global_mem->g_iterations;
+	iterations = 0;
 
 	odp_barrier_wait(&global_mem->global_barrier);
 
 	sync_failures = 0;
 	current_errs = 0;
 	rs_idx = 0;
-	resync_cnt = iterations / NUM_RESYNC_BARRIERS;
+	resync_cnt = MAX_ITERATIONS / NUM_RESYNC_BARRIERS;
 	lock_owner_delay = BASE_DELAY;
 
-	for (cnt = 1; cnt <= iterations; cnt++) {
+	/*
+	 * Tuning the iteration number:
+	 * Here, we search for an iteration number that guarantees to show
+	 * race conditions between the odp threads.
+	 * Iterations is set to ITER_MPLY_FACTOR * cnt where cnt is when
+	 * the threads start to see "errors" (i.e. the effect of other threads
+	 * running concurrently without any synchronisation mechanism).
+	 * In other words, "iterations" is set to ITER_MPLY_FACTOR times the
+	 * minimum loop count necessary to see a need for a synchronisation
+	 * mechanism.
+	 * If, later, these "errors" disappear when running other tests up to
+	 * "iterations" with synchro, the effect of the tested synchro
+	 * mechanism is likely proven.
+	 * If we reach "MAX_ITERATIONS" and "iterations" remains zero,
+	 * it means that we cannot see any race condition between the
+	 * different running threads (e.g. the OS is not preemptive) and all
+	 * other tests being passed won't tell much about the functionality
+	 * of the tested synchro mechanism.
+	 */
+	for (cnt = 1; cnt <= MAX_ITERATIONS; cnt++) {
 		global_mem->global_lock_owner = thread_num;
 		odp_mb_full();
 		thread_delay(per_thread_mem, lock_owner_delay);
@@ -353,6 +381,8 @@
 		if (global_mem->global_lock_owner != thread_num) {
 			current_errs++;
 			sync_failures++;
+			if (!iterations)
+				iterations = cnt;
 		}
 
 		global_mem->global_lock_owner = 0;
@@ -362,6 +392,8 @@
 		if (global_mem->global_lock_owner == thread_num) {
 			current_errs++;
 			sync_failures++;
+			if (!iterations)
+				iterations = cnt;
 		}
 
 		if (current_errs == 0)
@@ -392,6 +424,31 @@
 	 */
 	CU_ASSERT(sync_failures != 0 || global_mem->g_num_threads == 1);
 
+	/*
+	 * Set the iteration count for the future tests to be far above the
+	 * contention level.
+	 */
+	iterations *= ITER_MPLY_FACTOR;
+
+	if (iterations > MAX_ITERATIONS)
+		iterations = MAX_ITERATIONS;
+	if (iterations < MIN_ITERATIONS)
+		iterations = MIN_ITERATIONS;
+
+	/*
+	 * Note that the following statement has race conditions:
+	 * global_mem->g_iterations should really be atomic and a TAS
+	 * function should be used. But this would mean that we would be
+	 * testing synchronisers assuming synchronisers work...
+	 * If we do not use atomic TAS, we may not get the grand max for
+	 * all threads, but we are guaranteed to have passed the error
+	 * threshold for at least some threads, which is good enough.
+	 */
+	if (iterations > global_mem->g_iterations)
+		global_mem->g_iterations = iterations;
+
+	odp_mb_full();
+
 	thread_finalize(per_thread_mem);
 
 	return NULL;
@@ -910,7 +967,7 @@ void lock_test_no_lock_functional(void)
 }
 
 odp_testinfo_t lock_suite_no_locking[] = {
-	ODP_TEST_INFO(lock_test_no_lock_functional),
+	ODP_TEST_INFO(lock_test_no_lock_functional), /* must be first */
 	ODP_TEST_INFO_NULL
 };
 
@@ -1082,7 +1139,7 @@ int lock_init(void)
 	memset(global_mem, 0, sizeof(global_shared_mem_t));
 
 	global_mem->g_num_threads = MAX_WORKERS;
-	global_mem->g_iterations = MAX_ITERATIONS;
+	global_mem->g_iterations = 0; /* tuned by first test */
 	global_mem->g_verbose = VERBOSE;
 
 	workers_count = odp_cpumask_default_worker(&mask, 0);
@@ -1106,7 +1163,7 @@
 odp_suiteinfo_t lock_suites[] = {
 	{"nolocking", lock_suite_init, NULL,
-	 lock_suite_no_locking},
+	 lock_suite_no_locking}, /* must be first */
 	{"spinlock", lock_suite_init, NULL,
 	 lock_suite_spinlock},
 	{"spinlock_recursive", lock_suite_init, NULL,
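
Note for reviewers (not part of the patch): below is a minimal standalone
sketch of the tuning arithmetic the patch performs after the unlocked loop.
The helper tune_iterations() and the first_failure_cnt values are
hypothetical, standing in for the cnt at which a thread first observed a
sync failure; the real logic lives in no_lock_functional_test() above. It
only illustrates the multiply-by-ITER_MPLY_FACTOR and clamp-to-range step.

#include <stdio.h>

#define MIN_ITERATIONS   1000
#define MAX_ITERATIONS   30000
#define ITER_MPLY_FACTOR 3

/* Illustrative helper (not in the patch): derive the iteration count used
 * by the later lock tests from the loop count at which the unlocked test
 * first saw a sync failure (0 means no failure was ever observed). */
static int tune_iterations(int first_failure_cnt)
{
	int iterations = first_failure_cnt * ITER_MPLY_FACTOR;

	if (iterations > MAX_ITERATIONS)
		iterations = MAX_ITERATIONS;
	if (iterations < MIN_ITERATIONS)
		iterations = MIN_ITERATIONS;

	return iterations;
}

int main(void)
{
	/* first race at iteration 700   -> 2100 iterations
	 * first race at iteration 20000 -> clamped down to 30000
	 * no race seen (0)              -> floor of 1000 iterations */
	printf("%d\n", tune_iterations(700));
	printf("%d\n", tune_iterations(20000));
	printf("%d\n", tune_iterations(0));
	return 0;
}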