From patchwork Fri Jan 13 07:55:43 2017
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 91271
Delivered-To: patch@linaro.org
From: Christophe Milard
To: mike.holmes@linaro.org, bill.fischofer@linaro.org, yi.he@linaro.org, forrest.shi@linaro.org, francois.ozog@linaro.org, lng-odp@lists.linaro.org
Date: Fri, 13 Jan 2017 08:55:43 +0100
Message-Id: <1484294143-29882-7-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1484294143-29882-1-git-send-email-christophe.milard@linaro.org>
References: <1484294143-29882-1-git-send-email-christophe.milard@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv7 6/6] test: drv: shm: adding buddy allocation stress tests
List-Id: "The OpenDataPlane (ODP) List" <lng-odp@lists.linaro.org>

Stress tests for the random size allocator (buddy allocator in linux-generic) are added here.
Signed-off-by: Christophe Milard
---
 .../common_plat/validation/drv/drvshmem/drvshmem.c | 177 +++++++++++++++++++++
 .../common_plat/validation/drv/drvshmem/drvshmem.h |   1 +
 2 files changed, 178 insertions(+)
--
2.7.4

diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.c b/test/common_plat/validation/drv/drvshmem/drvshmem.c
index d4dedea..0f882ae 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.c
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.c
@@ -938,6 +938,182 @@ void drvshmem_test_slab_basic(void)
 	odpdrv_shm_pool_destroy(pool);
 }
 
+/*
+ * thread part for the drvshmem_test_buddy_stress
+ */
+static int run_test_buddy_stress(void *arg ODP_UNUSED)
+{
+	odpdrv_shm_t shm;
+	odpdrv_shm_pool_t pool;
+	uint8_t *address;
+	shared_test_data_t *glob_data;
+	uint8_t random_bytes[STRESS_RANDOM_SZ];
+	uint32_t index;
+	uint32_t size;
+	uint8_t data;
+	uint32_t iter;
+	uint32_t i;
+
+	shm = odpdrv_shm_lookup_by_name(MEM_NAME);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	/* get the pool to test */
+	pool = odpdrv_shm_pool_lookup(POOL_NAME);
+
+	/* wait for general GO! */
+	odpdrv_barrier_wait(&glob_data->test_barrier1);
+
+	/*
+	 * at each iteration: pick a random index for
+	 * glob_data->stress[index]: if the entry is free, allocate memory
+	 * of random size; if it is already allocated, check it and free it.
+	 * Note that different threads can allocate or free a given block.
+	 */
+	for (iter = 0; iter < STRESS_ITERATION; iter++) {
+		/* get 4 random bytes from which index, size, align, flags
+		 * and data will be derived:
+		 */
+		odp_random_data(random_bytes, STRESS_RANDOM_SZ, 0);
+		index = random_bytes[0] & (STRESS_SIZE - 1);
+
+		odp_spinlock_lock(&glob_data->stress_lock);
+
+		switch (glob_data->stress[index].state) {
+		case STRESS_FREE:
+			/* allocate a new block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			size = (random_bytes[1] + 1) << 4; /* up to 4KB */
+			data = random_bytes[2];
+
+			address = odpdrv_shm_pool_alloc(pool, size);
+			glob_data->stress[index].address = address;
+			if (address == NULL) { /* out of mem ? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_ALLOC;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			glob_data->stress[index].size = size;
+			glob_data->stress[index].data_val = data;
+
+			/* write some data: */
+			for (i = 0; i < size; i++)
+				address[i] = (data++) & 0xFF;
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_ALLOC;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_ALLOC:
+			/* free the block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			address = glob_data->stress[index].address;
+
+			if (address == NULL) { /* allocation had failed? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_FREE;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			/* check that data is reachable and correct: */
+			data = glob_data->stress[index].data_val;
+			size = glob_data->stress[index].size;
+			for (i = 0; i < size; i++) {
+				CU_ASSERT(address[i] == (data & 0xFF));
+				data++;
+			}
+
+			odpdrv_shm_pool_free(pool, address);
+
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_FREE;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_BUSY:
+		default:
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			break;
+		}
+	}
+
+	fflush(stdout);
+	return CU_get_number_of_failures();
+}
+
+/*
+ * stress tests
+ */
+void drvshmem_test_buddy_stress(void)
+{
+	odpdrv_shm_pool_param_t pool_params;
+	odpdrv_shm_pool_t pool;
+	pthrd_arg thrdarg;
+	odpdrv_shm_t shm;
+	shared_test_data_t *glob_data;
+	odp_cpumask_t unused;
+	uint32_t i;
+	uint8_t *address;
+
+	/* create a pool and check that it can be looked up */
+	pool_params.pool_size = POOL_SZ;
+	pool_params.min_alloc = 0;
+	pool_params.max_alloc = POOL_SZ;
+	pool = odpdrv_shm_pool_create(POOL_NAME, &pool_params);
+	odpdrv_shm_pool_print("Stress test start", pool);
+
+	shm = odpdrv_shm_reserve(MEM_NAME, sizeof(shared_test_data_t),
+				 0, ODPDRV_SHM_LOCK);
+	CU_ASSERT(ODPDRV_SHM_INVALID != shm);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0);
+	if (thrdarg.numthrds > MAX_WORKERS)
+		thrdarg.numthrds = MAX_WORKERS;
+
+	glob_data->nb_threads = thrdarg.numthrds;
+	odpdrv_barrier_init(&glob_data->test_barrier1, thrdarg.numthrds);
+	odp_spinlock_init(&glob_data->stress_lock);
+
+	/* before starting the threads, mark all entries as free: */
+	for (i = 0; i < STRESS_SIZE; i++)
+		glob_data->stress[i].state = STRESS_FREE;
+
+	/* create threads */
+	odp_cunit_thread_create(run_test_buddy_stress, &thrdarg);
+
+	/* wait for all threads to end: */
+	CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0);
+
+	odpdrv_shm_pool_print("Stress test, all threads finished", pool);
+
+	/* release left-overs: */
+	for (i = 0; i < STRESS_SIZE; i++) {
+		address = glob_data->stress[i].address;
+		if (glob_data->stress[i].state == STRESS_ALLOC)
+			odpdrv_shm_pool_free(pool, address);
+	}
+
+	CU_ASSERT(0 == odpdrv_shm_free_by_name(MEM_NAME));
+
+	/* check that no memory is left over: */
+	odpdrv_shm_pool_print("Stress test, all released", pool);
+
+	/* destroy pool: */
+	odpdrv_shm_pool_destroy(pool);
+}
+
 odp_testinfo_t drvshmem_suite[] = {
 	ODP_TEST_INFO(drvshmem_test_basic),
 	ODP_TEST_INFO(drvshmem_test_reserve_after_fork),
@@ -945,6 +1121,7 @@ odp_testinfo_t drvshmem_suite[] = {
 	ODP_TEST_INFO(drvshmem_test_stress),
 	ODP_TEST_INFO(drvshmem_test_buddy_basic),
 	ODP_TEST_INFO(drvshmem_test_slab_basic),
+	ODP_TEST_INFO(drvshmem_test_buddy_stress),
 	ODP_TEST_INFO_NULL,
 };
 
diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.h b/test/common_plat/validation/drv/drvshmem/drvshmem.h
index fdc1080..817b3d5 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.h
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.h
@@ -16,6 +16,7 @@ void drvshmem_test_singleva_after_fork(void);
 void drvshmem_test_stress(void);
 void drvshmem_test_buddy_basic(void);
 void drvshmem_test_slab_basic(void);
+void drvshmem_test_buddy_stress(void);
 
 /* test arrays: */
 extern odp_testinfo_t drvshmem_suite[];