From patchwork Wed May 25 15:30:16 2016
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 68614
From: Maxim Uvarov
To: lng-odp@lists.linaro.org
Date: Wed, 25 May 2016 18:30:16 +0300
Message-Id: <1464190216-13226-1-git-send-email-maxim.uvarov@linaro.org>
Subject: [lng-odp] [PATCHv2] linux-generic: test: fix ring resource leaks
List-Id: "The OpenDataPlane (ODP) List"

Simplify the test a little. Free the allocated memory on all exit paths
and guard against underflow by casting the counter to a signed type:

    (int32_t)odp_atomic_load_u32(consume_count)

since several consumer threads can dequeue from the ring and decrement
the atomic u32 concurrently, driving it below zero.
Signed-off-by: Maxim Uvarov
Reviewed-and-tested-by: Bill Fischofer
---
v2: some cosmetic fixes:
 - remove the unneeded usleep();
 - add a barrier so enqueue/dequeue threads hit the ring at the same time;
 - use a more accurate cast to (int32_t) instead of (int);

Note: this test hangs with both -mcx16 and -O3 on gcc <= 4.9; that will be
fixed in a separate patch.

 platform/linux-generic/test/ring/ring_stress.c | 82 ++++++++++++++------------
 1 file changed, 43 insertions(+), 39 deletions(-)

diff --git a/platform/linux-generic/test/ring/ring_stress.c b/platform/linux-generic/test/ring/ring_stress.c
index c68419f..8b6d9ae 100644
--- a/platform/linux-generic/test/ring/ring_stress.c
+++ b/platform/linux-generic/test/ring/ring_stress.c
@@ -54,6 +54,9 @@ static odp_atomic_u32_t *retrieve_consume_count(void);
 static const char *ring_name = "stress ring";
 static const char *consume_count_name = "stress ring consume count";
 
+/* barrier to run threads at the same time */
+static odp_barrier_t barrier;
+
 int ring_test_stress_start(void)
 {
 	odp_shm_t shared;
@@ -120,6 +123,8 @@ void ring_test_stress_1_1_producer_consumer(void)
 	 */
 	odp_atomic_init_u32(consume_count, 1);
 
+	odp_barrier_init(&barrier, 2);
+
 	/* kick the workers */
 	odp_cunit_thread_create(stress_worker, &worker_param);
 
@@ -156,12 +161,13 @@ void ring_test_stress_N_M_producer_consumer(void)
 	consume_count = retrieve_consume_count();
 	CU_ASSERT(consume_count != NULL);
 
-	/* in N:M test case, producer threads are always
-	 * greater or equal to consumer threads, thus produce
-	 * enought "goods" to be consumed by consumer threads.
+	/* all producer threads try to fill the ring to RING_SIZE,
+	 * while consumer threads dequeue from the ring in PIECE_BULK
+	 * blocks. Multiply by 100 to allow more iterations.
 	 */
-	odp_atomic_init_u32(consume_count,
-			    (worker_param.numthrds) / 2);
+	odp_atomic_init_u32(consume_count, RING_SIZE / PIECE_BULK * 100);
+
+	odp_barrier_init(&barrier, worker_param.numthrds);
 
 	/* kick the workers */
 	odp_cunit_thread_create(stress_worker, &worker_param);
@@ -202,8 +208,15 @@ static odp_atomic_u32_t *retrieve_consume_count(void)
 /* worker function for multiple producer instances */
 static int do_producer(_ring_t *r)
 {
-	int i, result = 0;
+	int i;
 	void **enq = NULL;
+	odp_atomic_u32_t *consume_count;
+
+	consume_count = retrieve_consume_count();
+	if (consume_count == NULL) {
+		LOG_ERR("cannot retrieve expected consume count.\n");
+		return -1;
+	}
 
 	/* allocate dummy object pointers for enqueue */
 	enq = malloc(PIECE_BULK * 2 * sizeof(void *));
@@ -216,26 +229,29 @@ static int do_producer(_ring_t *r)
 	for (i = 0; i < PIECE_BULK; i++)
 		enq[i] = (void *)(unsigned long)i;
 
-	do {
-		result = _ring_mp_enqueue_bulk(r, enq, PIECE_BULK);
-		if (0 == result) {
-			free(enq);
-			return 0;
-		}
-		usleep(10); /* wait for consumer threads */
-	} while (!_ring_full(r));
+	odp_barrier_wait(&barrier);
+	while ((int32_t)odp_atomic_load_u32(consume_count) > 0) {
+		/* produce as much data as we can to the ring */
+		(void)_ring_mp_enqueue_bulk(r, enq, PIECE_BULK);
+	}
 
+	free(enq);
 	return 0;
 }
 
 /* worker function for multiple consumer instances */
 static int do_consumer(_ring_t *r)
 {
-	int i, result = 0;
+	int i;
 	void **deq = NULL;
-	odp_atomic_u32_t *consume_count = NULL;
-	const char *message = "test OK!";
-	const char *mismatch = "data mismatch..lockless enq/deq failed.";
+	odp_atomic_u32_t *consume_count;
+
+	consume_count = retrieve_consume_count();
+	if (consume_count == NULL) {
+		LOG_ERR("cannot retrieve expected consume count.\n");
+		return -1;
+	}
 
 	/* allocate dummy object pointers for dequeue */
 	deq = malloc(PIECE_BULK * 2 * sizeof(void *));
@@ -244,31 +260,19 @@ static int do_consumer(_ring_t *r)
 		return 0; /* not failure, skip for insufficient memory */
 	}
 
-	consume_count = retrieve_consume_count();
-	if (consume_count == NULL) {
-		LOG_ERR("cannot retrieve expected consume count.\n");
-		return -1;
-	}
+	odp_barrier_wait(&barrier);
 
-	while (odp_atomic_load_u32(consume_count) > 0) {
-		result = _ring_mc_dequeue_bulk(r, deq, PIECE_BULK);
-		if (0 == result) {
-			/* evaluate the data pattern */
-			for (i = 0; i < PIECE_BULK; i++) {
-				if (deq[i] != (void *)(unsigned long)i) {
-					result = -1;
-					message = mismatch;
-					break;
-				}
-			}
-
-			free(deq);
-			LOG_ERR("%s\n", message);
+	while ((int32_t)odp_atomic_load_u32(consume_count) > 0) {
+		if (!_ring_mc_dequeue_bulk(r, deq, PIECE_BULK)) {
 			odp_atomic_dec_u32(consume_count);
-			return result;
+
+			/* evaluate the data pattern */
+			for (i = 0; i < PIECE_BULK; i++)
+				CU_ASSERT(deq[i] == (void *)(unsigned long)i);
 		}
-		usleep(10); /* wait for producer threads */
 	}
+
+	free(deq);
 	return 0;
 }