From patchwork Tue May 24 20:46:21 2016
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 68531
From: Maxim Uvarov
To: lng-odp@lists.linaro.org
Date: Tue, 24 May 2016 23:46:21 +0300
Message-Id: <1464122781-11280-1-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 2.7.1.250.gff4ea60
Subject: [lng-odp] [PATCH] linux-generic: test: fix ring resource leaks

Make the test a little simpler. Free the allocated memory and take care
of unsigned wraparound by casting the counter to int:

	(int)odp_atomic_load_u32(consume_count)

since several consumer threads can dequeue from the ring and decrement
the atomic u32 concurrently, they can drive it below zero, where an
unsigned compare would see a huge wrapped value.

Signed-off-by: Maxim Uvarov
---
 platform/linux-generic/test/ring/ring_stress.c | 74 ++++++++++++--------------
 1 file changed, 34 insertions(+), 40 deletions(-)

diff --git a/platform/linux-generic/test/ring/ring_stress.c b/platform/linux-generic/test/ring/ring_stress.c
index c68419f..a7e89a8 100644
--- a/platform/linux-generic/test/ring/ring_stress.c
+++ b/platform/linux-generic/test/ring/ring_stress.c
@@ -156,12 +156,11 @@ void ring_test_stress_N_M_producer_consumer(void)
 	consume_count = retrieve_consume_count();
 	CU_ASSERT(consume_count != NULL);
 
-	/* in N:M test case, producer threads are always
-	 * greater or equal to consumer threads, thus produce
-	 * enought "goods" to be consumed by consumer threads.
+	/* all producer threads try to fill the ring to RING_SIZE,
+	 * while consumer threads dequeue from the ring in PIECE_BULK
+	 * blocks. Multiply by 100 to add more tries.
 	 */
-	odp_atomic_init_u32(consume_count,
-			    (worker_param.numthrds) / 2);
+	odp_atomic_init_u32(consume_count, RING_SIZE / PIECE_BULK * 100);
 
 	/* kick the workers */
 	odp_cunit_thread_create(stress_worker, &worker_param);
@@ -202,8 +201,15 @@ static odp_atomic_u32_t *retrieve_consume_count(void)
 /* worker function for multiple producer instances */
 static int do_producer(_ring_t *r)
 {
-	int i, result = 0;
+	int i;
 	void **enq = NULL;
+	odp_atomic_u32_t *consume_count;
+
+	consume_count = retrieve_consume_count();
+	if (consume_count == NULL) {
+		LOG_ERR("cannot retrieve expected consume count.\n");
+		return -1;
+	}
 
 	/* allocate dummy object pointers for enqueue */
 	enq = malloc(PIECE_BULK * 2 * sizeof(void *));
@@ -216,26 +222,28 @@ static int do_producer(_ring_t *r)
 	for (i = 0; i < PIECE_BULK; i++)
 		enq[i] = (void *)(unsigned long)i;
 
-	do {
-		result = _ring_mp_enqueue_bulk(r, enq, PIECE_BULK);
-		if (0 == result) {
-			free(enq);
-			return 0;
-		}
-		usleep(10); /* wait for consumer threads */
-	} while (!_ring_full(r));
+	while ((int)odp_atomic_load_u32(consume_count) > 0) {
+		/* produce as much data as we can to the ring */
+		if (!_ring_mp_enqueue_bulk(r, enq, PIECE_BULK))
+			usleep(10);
+	}
 
+	free(enq);
 	return 0;
 }
 
 /* worker function for multiple consumer instances */
 static int do_consumer(_ring_t *r)
 {
-	int i, result = 0;
+	int i;
 	void **deq = NULL;
-	odp_atomic_u32_t *consume_count = NULL;
-	const char *message = "test OK!";
-	const char *mismatch = "data mismatch..lockless enq/deq failed.";
+	odp_atomic_u32_t *consume_count;
+
+	consume_count = retrieve_consume_count();
+	if (consume_count == NULL) {
+		LOG_ERR("cannot retrieve expected consume count.\n");
+		return -1;
+	}
 
 	/* allocate dummy object pointers for dequeue */
 	deq = malloc(PIECE_BULK * 2 * sizeof(void *));
@@ -244,31 +252,17 @@ static int do_consumer(_ring_t *r)
 		return 0; /* not failure, skip for insufficient memory */
 	}
 
-	consume_count = retrieve_consume_count();
-	if (consume_count == NULL) {
-		LOG_ERR("cannot retrieve expected consume count.\n");
-		return -1;
-	}
-
-	while (odp_atomic_load_u32(consume_count) > 0) {
-		result = _ring_mc_dequeue_bulk(r, deq, PIECE_BULK);
-		if (0 == result) {
-			/* evaluate the data pattern */
-			for (i = 0; i < PIECE_BULK; i++) {
-				if (deq[i] != (void *)(unsigned long)i) {
-					result = -1;
-					message = mismatch;
-					break;
-				}
-			}
-
-			free(deq);
-			LOG_ERR("%s\n", message);
+	while ((int)odp_atomic_load_u32(consume_count) > 0) {
+		if (!_ring_mc_dequeue_bulk(r, deq, PIECE_BULK)) {
 			odp_atomic_dec_u32(consume_count);
-			return result;
+
+			/* evaluate the data pattern */
+			for (i = 0; i < PIECE_BULK; i++)
+				CU_ASSERT(deq[i] == (void *)(unsigned long)i);
 		}
-		usleep(10); /* wait for producer threads */
 	}
+
+	free(deq);
 	return 0;
 }
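
For readers who haven't seen the idiom, here is a minimal standalone
sketch of why the int cast matters. It uses C11 <stdatomic.h> as a
stand-in for odp_atomic_u32_t, so the types and calls below are
illustrative assumptions, not part of the patch or of the ODP API:

	/* sketch: two consumers race the shared counter past zero */
	#include <stdatomic.h>
	#include <stdio.h>

	int main(void)
	{
		atomic_uint consume_count;

		atomic_init(&consume_count, 1);

		/* both "consumers" saw the count as 1 and decrement:
		 * 1 -> 0 -> wraps to UINT_MAX on the second sub */
		atomic_fetch_sub(&consume_count, 1);
		atomic_fetch_sub(&consume_count, 1);

		unsigned int raw = atomic_load(&consume_count);

		/* unsigned compare: the wrapped value is huge, so a
		 * worker loop testing raw > 0 would spin forever */
		printf("raw > 0      -> %d\n", raw > 0);      /* 1 */

		/* int cast: the wrapped value reads as negative on
		 * two's complement targets, so loops of the form
		 * while ((int)odp_atomic_load_u32(...) > 0)
		 * terminate as intended */
		printf("(int)raw > 0 -> %d\n", (int)raw > 0); /* 0 */

		return 0;
	}

The same guard is what lets do_producer poll a counter it never
decrements itself: once the consumers drive the count to or past zero,
both worker loops observe a non-positive int and fall through to free()
and return.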