From patchwork Sat Jul 18 20:03:42 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 51256
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Cc: Barry Spinney
Date: Sat, 18 Jul 2015 15:03:42 -0500
Message-Id: <1437249827-578-8-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1437249827-578-1-git-send-email-bill.fischofer@linaro.org>
References: <1437249827-578-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [RFC API-NEXT PATCH 07/12] linux-generic: tm: add pkt_queue support routines
Signed-off-by: Barry Spinney
Signed-off-by: Mike Holmes
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 platform/linux-generic/Makefile.am                 |   2 +
 .../linux-generic/include/odp_pkt_queue_internal.h |  62 ++++
 platform/linux-generic/odp_pkt_queue.c             | 376 +++++++++++++++++++++
 3 files changed, 440 insertions(+)
 create mode 100644 platform/linux-generic/include/odp_pkt_queue_internal.h
 create mode 100644 platform/linux-generic/odp_pkt_queue.c

diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am
index f1815e7..2eb271a 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -131,6 +131,7 @@ noinst_HEADERS = \
 		  ${srcdir}/include/odp_spin_internal.h \
 		  ${srcdir}/include/odp_timer_internal.h \
 		  ${srcdir}/include/odp_name_table_internal.h \
+		  ${srcdir}/include/odp_pkt_queue_internal.h \
 		  ${srcdir}/Makefile.inc
 
 __LIB__libodp_la_SOURCES = \
@@ -143,6 +144,7 @@ __LIB__libodp_la_SOURCES = \
			   odp_event.c \
			   odp_init.c \
			   odp_name_table.c \
+			   odp_pkt_queue.c \
			   odp_impl.c \
			   odp_packet.c \
			   odp_packet_flags.c \
diff --git a/platform/linux-generic/include/odp_pkt_queue_internal.h b/platform/linux-generic/include/odp_pkt_queue_internal.h
new file mode 100644
index 0000000..8138255
--- /dev/null
+++ b/platform/linux-generic/include/odp_pkt_queue_internal.h
@@ -0,0 +1,62 @@
+/* Copyright 2015 EZchip Semiconductor Ltd. All Rights Reserved.
+ *
+ * Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef ODP_INT_PKT_QUEUE_H_
+#define ODP_INT_PKT_QUEUE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <odp.h>
+
+typedef uint64_t odp_int_queue_pool_t;
+typedef uint32_t odp_int_pkt_queue_t;
+
+#define ODP_INT_QUEUE_POOL_INVALID 0
+#define ODP_INT_PKT_QUEUE_INVALID  0
+
+/* None of the functions in this file do any locking. That's because the
+ * expected usage model is that each TM system will create its own
+ * odp_int_queue_pool, and then only call odp_pkt_queue_append and
+ * odp_pkt_queue_remove from a single thread associated/dedicated to this
+ * same TM system/odp_int_queue_pool. The main difficulty that this file
+ * tries to deal with is the possibility of a huge number of queues (e.g.
+ * 16 million), where each such queue could have a huge range in the number
+ * of pkts queued (say 0 to > 1,000) - yet the total "peak" number of pkts
+ * queued is many orders of magnitude smaller than the product of
+ * max_num_queues times max_queue_cnt. In particular, it is assumed that
+ * even at peak usage, only a small fraction of max_num_queues will be
+ * "active" - i.e. have any pkts queued - yet over time it is expected that
+ * almost every queue will have some sort of backlog.
+ */
+
+/* max_num_queues must be <= 16 * 1024 * 1024. */
+odp_int_queue_pool_t odp_queue_pool_create(uint32_t max_num_queues,
+					   uint32_t max_queued_pkts);
+
+odp_int_pkt_queue_t odp_pkt_queue_create(odp_int_queue_pool_t queue_pool);
+
+int odp_pkt_queue_append(odp_int_queue_pool_t queue_pool,
+			 odp_int_pkt_queue_t pkt_queue,
+			 odp_packet_t pkt);
+
+int odp_pkt_queue_remove(odp_int_queue_pool_t queue_pool,
+			 odp_int_pkt_queue_t pkt_queue,
+			 odp_packet_t *pkt);
+
+void odp_pkt_queue_stats_print(odp_int_queue_pool_t queue_pool);
+
+void odp_queue_pool_destroy(odp_int_queue_pool_t queue_pool);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/platform/linux-generic/odp_pkt_queue.c b/platform/linux-generic/odp_pkt_queue.c
new file mode 100644
index 0000000..5a4e937
--- /dev/null
+++ b/platform/linux-generic/odp_pkt_queue.c
@@ -0,0 +1,376 @@
+/* Copyright 2015 EZchip Semiconductor Ltd. All Rights Reserved.
+ *
+ * Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <malloc.h>
+#include <stdio.h>
+#include <odp.h>
+#include <odp_pkt_queue_internal.h>
+#include <odp_debug_internal.h>
+
+#define MAX(a, b) (((a) > (b)) ? (a) : (b))
+#define MIN(a, b) (((a) < (b)) ? (a) : (b))
+
+typedef struct /* Must be exactly 64 bytes long AND cacheline aligned! */ {
+	uint32_t next_queue_blk_idx;
+	uint32_t tail_queue_blk_idx;
+	odp_packet_t pkts[7];
+} ODP_ALIGNED_CACHE queue_blk_t;
+
+typedef struct {
+	queue_blk_t blks[0];
+} ODP_ALIGNED_CACHE queue_blks_t;
+
+/* The queue_num_tbl is used to map from a queue_num to a queue_num_desc.
+ * The reason is based on the assumption that usually only a small fraction
+ * of the max_num_queues will have more than 1 pkt associated with it. This
+ * way the active queue_desc's can be dynamically allocated and freed
+ * according to the actual usage pattern.
+ */
+typedef struct {
+	uint32_t queue_num_to_blk_idx[0];
+} queue_num_tbl_t;
+
+typedef struct {
+	uint32_t num_blks;
+	uint32_t next_blk_idx; /* blk_idx of queue_blks not yet added. */
+	queue_blks_t *queue_blks;
+} queue_region_desc_t;
+
+typedef struct {
+	uint64_t total_pkt_appends;
+	uint64_t total_pkt_removes;
+	uint64_t total_bad_removes;
+	uint32_t free_list_size;
+	uint32_t min_free_list_size;
+	uint32_t peak_free_list_size;
+	uint32_t free_list_head_idx;
+	uint32_t max_queue_num;
+	uint32_t max_queued_pkts;
+	uint32_t next_queue_num;
+	queue_region_desc_t queue_region_descs[16];
+	uint32_t *queue_num_tbl;
+	uint8_t current_region;
+	uint8_t all_regions_used;
+} queue_pool_t;
+
+static queue_blk_t *blk_idx_to_queue_blk(queue_pool_t *queue_pool,
+					 uint32_t queue_blk_idx)
+{
+	queue_region_desc_t *queue_region_desc;
+	uint32_t which_region, blk_tbl_idx;
+
+	which_region = queue_blk_idx >> 28;
+	blk_tbl_idx = queue_blk_idx & ((1 << 28) - 1);
+	queue_region_desc = &queue_pool->queue_region_descs[which_region];
+	return &queue_region_desc->queue_blks->blks[blk_tbl_idx];
+}
+
+static int pkt_queue_free_list_add(queue_pool_t *pool,
+				   uint32_t num_queue_blks)
+{
+	queue_region_desc_t *region_desc;
+	queue_blks_t *queue_blks;
+	queue_blk_t *queue_blk;
+	uint32_t which_region, blks_added, num_blks, start_idx;
+	uint32_t malloc_len, blks_to_add, cnt;
+
+	which_region = pool->current_region;
+	blks_added = 0;
+	while ((blks_added < num_queue_blks) && (pool->all_regions_used == 0)) {
+		region_desc = &pool->queue_region_descs[which_region];
+		start_idx = region_desc->next_blk_idx;
+		num_blks = region_desc->num_blks;
+		queue_blks = region_desc->queue_blks;
+		if (!queue_blks) {
+			malloc_len = num_blks * sizeof(queue_blk_t);
+			queue_blks = malloc(malloc_len);
+			memset(queue_blks, 0, malloc_len);
+			region_desc->queue_blks = queue_blks;
+		}
+
+		/* Now add as many queue_blks to the free list as... */
+		blks_to_add = MIN(num_blks - start_idx, num_queue_blks);
+		queue_blk = &queue_blks->blks[start_idx];
+		for (cnt = 1; cnt <= blks_to_add; cnt++) {
+			queue_blk->next_queue_blk_idx = start_idx + cnt;
+			queue_blk++;
+		}
+
+		blks_added += blks_to_add;
+		pool->free_list_size += blks_to_add;
+		region_desc->next_blk_idx += blks_to_add;
+		if (blks_to_add == (num_blks - start_idx)) {
+			/* Advance to the next region */
+			pool->current_region++;
+			if (16 <= pool->current_region) {
+				pool->all_regions_used = 1;
+				return blks_added;
+			}
+
+			which_region = pool->current_region;
+		}
+	}
+
+	return blks_added;
+}
+
+static queue_blk_t *queue_blk_alloc(queue_pool_t *pool,
+				    uint32_t *queue_blk_idx)
+{
+	queue_blk_t *head_queue_blk;
+	uint32_t head_queue_blk_idx;
+	int rc;
+
+	if (pool->free_list_size <= 1) {
+		/* Replenish the queue_blk_t free list. */
+		pool->min_free_list_size = pool->free_list_size;
+		rc = pkt_queue_free_list_add(pool, 64);
+		if (rc <= 0)
+			return NULL;
+	}
+
+	head_queue_blk_idx = pool->free_list_head_idx;
+	head_queue_blk = blk_idx_to_queue_blk(pool, head_queue_blk_idx);
+	pool->free_list_size--;
+	pool->free_list_head_idx = head_queue_blk->next_queue_blk_idx;
+	*queue_blk_idx = head_queue_blk_idx;
+	if (pool->free_list_size < pool->min_free_list_size)
+		pool->min_free_list_size = pool->free_list_size;
+
+	return head_queue_blk;
+}
+
+static void queue_blk_free(queue_pool_t *pool, queue_blk_t *queue_blk,
+			   uint32_t queue_blk_idx)
+{
+	if ((!queue_blk) || (queue_blk_idx == 0))
+		return;
+
+	queue_blk->next_queue_blk_idx = pool->free_list_head_idx;
+	pool->free_list_head_idx = queue_blk_idx;
+	pool->free_list_size++;
+	if (pool->peak_free_list_size < pool->free_list_size)
+		pool->peak_free_list_size = pool->free_list_size;
+}
+
+static void queue_region_desc_init(queue_pool_t *pool, uint32_t which_region,
+				   uint32_t num_blks)
+{
+	queue_region_desc_t *queue_region_desc;
+
+	queue_region_desc = &pool->queue_region_descs[which_region];
+	queue_region_desc->num_blks = num_blks;
+}
+
+odp_int_queue_pool_t odp_queue_pool_create(uint32_t max_num_queues,
+					   uint32_t max_queued_pkts)
+{
+	queue_pool_t *pool;
+	uint32_t idx, initial_free_list_size, malloc_len, first_queue_blk_idx;
+	int rc;
+
+	pool = malloc(sizeof(queue_pool_t));
+	memset(pool, 0, sizeof(queue_pool_t));
+
+	/* Initialize the queue_blk_tbl_sizes array based upon the
+	 * max_queued_pkts.
+	 */
+	max_queued_pkts = MAX(max_queued_pkts, 64 * 1024);
+	queue_region_desc_init(pool, 0, max_queued_pkts / 4);
+	queue_region_desc_init(pool, 1, max_queued_pkts / 64);
+	queue_region_desc_init(pool, 2, max_queued_pkts / 64);
+	queue_region_desc_init(pool, 3, max_queued_pkts / 64);
+	queue_region_desc_init(pool, 4, max_queued_pkts / 64);
+	for (idx = 5; idx < 16; idx++)
+		queue_region_desc_init(pool, idx, max_queued_pkts / 16);
+
+	/* Now allocate the first queue_blk_tbl and add its blks to the free
+	 * list. Replenish the queue_blk_t free list.
+	 */
+	initial_free_list_size = MIN(64 * 1024, max_queued_pkts / 4);
+	rc = pkt_queue_free_list_add(pool, initial_free_list_size);
+	if (rc < 0)
+		return ODP_INT_QUEUE_POOL_INVALID;
+
+	/* Discard the first queue blk with idx 0 */
+	queue_blk_alloc(pool, &first_queue_blk_idx);
+
+	pool->max_queue_num = max_num_queues;
+	pool->max_queued_pkts = max_queued_pkts;
+	pool->next_queue_num = 1;
+
+	malloc_len = max_num_queues * sizeof(uint32_t);
+	pool->queue_num_tbl = malloc(malloc_len);
+	memset(pool->queue_num_tbl, 0, malloc_len);
+
+	pool->min_free_list_size = pool->free_list_size;
+	pool->peak_free_list_size = pool->free_list_size;
+	return (odp_int_queue_pool_t)pool;
+}
+
+odp_int_pkt_queue_t odp_pkt_queue_create(odp_int_queue_pool_t queue_pool)
+{
+	queue_pool_t *pool;
+	uint32_t queue_num;
+
+	pool = (queue_pool_t *)queue_pool;
+	queue_num = pool->next_queue_num++;
+	if (pool->max_queue_num < queue_num)
+		return ODP_INT_PKT_QUEUE_INVALID;
+
+	return (odp_int_pkt_queue_t)queue_num;
+}
+
+int odp_pkt_queue_append(odp_int_queue_pool_t queue_pool,
+			 odp_int_pkt_queue_t pkt_queue, odp_packet_t pkt)
+{
+	queue_pool_t *pool;
+	queue_blk_t *first_blk, *tail_blk, *new_tail_blk;
+	uint32_t queue_num, first_blk_idx, tail_blk_idx, new_tail_blk_idx;
+	uint32_t idx;
+
+	pool = (queue_pool_t *)queue_pool;
+	queue_num = (uint32_t)pkt_queue;
+	if ((queue_num == 0) || (pool->max_queue_num < queue_num))
+		return -2;
+
+	if (pkt == ODP_PACKET_INVALID)
+		return -3;
+
+	pool->total_pkt_appends++;
+	first_blk_idx = pool->queue_num_tbl[queue_num];
+	if (first_blk_idx == 0) {
+		first_blk = queue_blk_alloc(pool, &first_blk_idx);
+		if (!first_blk)
+			return -1;
+
+		pool->queue_num_tbl[queue_num] = first_blk_idx;
+		memset(first_blk, 0, sizeof(queue_blk_t));
+		first_blk->pkts[0] = pkt;
+		return 0;
+	}
+
+	first_blk = blk_idx_to_queue_blk(pool, first_blk_idx);
+	tail_blk_idx = first_blk->tail_queue_blk_idx;
+	if (tail_blk_idx == 0)
+		tail_blk = first_blk;
+	else
+		tail_blk = blk_idx_to_queue_blk(pool, tail_blk_idx);
+
+	/* Find the first empty slot and insert the pkt there. */
+	for (idx = 0; idx < 7; idx++) {
+		if (tail_blk->pkts[idx] == ODP_PACKET_INVALID) {
+			tail_blk->pkts[idx] = pkt;
+			return 0;
+		}
+	}
+
+	/* If we reach here, the tail_blk was full, so we need to allocate a
+	 * new one and link it in.
+	 */
+	new_tail_blk = queue_blk_alloc(pool, &new_tail_blk_idx);
+	if (!new_tail_blk)
+		return -1;
+
+	memset(new_tail_blk, 0, sizeof(queue_blk_t));
+	new_tail_blk->pkts[0] = pkt;
+	tail_blk->next_queue_blk_idx = new_tail_blk_idx;
+	first_blk->tail_queue_blk_idx = new_tail_blk_idx;
+	return 0;
+}
+
+int odp_pkt_queue_remove(odp_int_queue_pool_t queue_pool,
+			 odp_int_pkt_queue_t pkt_queue, odp_packet_t *pkt)
+{
+	queue_pool_t *pool;
+	queue_blk_t *first_blk, *second_blk;
+	uint32_t queue_num, first_blk_idx, next_blk_idx, idx;
+
+	pool = (queue_pool_t *)queue_pool;
+	queue_num = (uint32_t)pkt_queue;
+	if ((queue_num == 0) || (pool->max_queue_num < queue_num))
+		return -2;
+
+	first_blk_idx = pool->queue_num_tbl[queue_num];
+	if (first_blk_idx == 0)
+		return 0; /* pkt queue is empty. */
+
+	/* Now remove the first valid odp_packet_t handle value we find. */
+	first_blk = blk_idx_to_queue_blk(pool, first_blk_idx);
+	for (idx = 0; idx < 7; idx++) {
+		if (first_blk->pkts[idx] != ODP_PACKET_INVALID) {
+			*pkt = first_blk->pkts[idx];
+			first_blk->pkts[idx] = ODP_PACKET_INVALID;
+
+			/* Now see if there are any more pkts in this queue. */
+			if ((idx == 6) ||
+			    (first_blk->pkts[idx + 1] == ODP_PACKET_INVALID)) {
+				/* We have reached the end of this queue_blk.
+				 * Check to see if there is a following block
+				 * or not.
+				 */
+				next_blk_idx = first_blk->next_queue_blk_idx;
+				if (next_blk_idx != 0) {
+					second_blk =
+						blk_idx_to_queue_blk(pool,
+								     next_blk_idx);
+					second_blk->tail_queue_blk_idx =
+						first_blk->tail_queue_blk_idx;
+				}
+
+				pool->queue_num_tbl[queue_num] = next_blk_idx;
+				queue_blk_free(pool, first_blk, first_blk_idx);
+			}
+
+			pool->total_pkt_removes++;
+			return 1;
+		}
+	}
+
+	/* It is an error to not find at least one pkt in the first_blk! */
+	pool->total_bad_removes++;
+	return -1;
+}
+
+void odp_pkt_queue_stats_print(odp_int_queue_pool_t queue_pool)
+{
+	queue_pool_t *pool;
+
+	pool = (queue_pool_t *)queue_pool;
+	ODP_DBG("pkt_queue_stats - queue_pool=0x%lX\n", queue_pool);
+	ODP_DBG("  max_queue_num=%u max_queued_pkts=%u next_queue_num=%u\n",
+		pool->max_queue_num, pool->max_queued_pkts,
+		pool->next_queue_num);
+	ODP_DBG("  total pkt appends=%lu total pkt removes=%lu "
+		"bad removes=%lu\n",
+		pool->total_pkt_appends, pool->total_pkt_removes,
+		pool->total_bad_removes);
+	ODP_DBG("  free_list size=%u min size=%u peak size=%u\n",
+		pool->free_list_size, pool->min_free_list_size,
+		pool->peak_free_list_size);
+}
+
+void odp_queue_pool_destroy(odp_int_queue_pool_t queue_pool)
+{
+	queue_region_desc_t *queue_region_desc;
+	queue_pool_t *pool;
+	uint32_t idx;
+
+	pool = (queue_pool_t *)queue_pool;
+	for (idx = 0; idx < 16; idx++) {
+		queue_region_desc = &pool->queue_region_descs[idx];
+		if (queue_region_desc->queue_blks)
+			free(queue_region_desc->queue_blks);
+	}
+
+	free(pool->queue_num_tbl);
+	free(pool);
+}
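
--
For reviewers: below is a minimal usage sketch of the API this patch adds,
following the single-thread-per-TM-system model described in the comment in
odp_pkt_queue_internal.h. It is illustrative only and not part of the patch:
the function name pkt_queue_example is hypothetical, ODP is assumed to be
initialized, and pkt is assumed to be a valid odp_packet_t obtained
elsewhere (e.g. from odp_packet_alloc()).

	#include <odp.h>
	#include <odp_pkt_queue_internal.h>

	static void pkt_queue_example(odp_packet_t pkt)
	{
		odp_int_queue_pool_t pool;
		odp_int_pkt_queue_t queue;
		odp_packet_t out_pkt;
		int rc;

		/* One pool per TM system. max_num_queues must be
		 * <= 16 * 1024 * 1024; max_queued_pkts is raised to at
		 * least 64K internally by odp_queue_pool_create(). */
		pool = odp_queue_pool_create(1024 * 1024, 256 * 1024);
		if (pool == ODP_INT_QUEUE_POOL_INVALID)
			return;

		queue = odp_pkt_queue_create(pool);
		if (queue == ODP_INT_PKT_QUEUE_INVALID)
			return;

		/* Append and remove are only safe from the single thread
		 * that owns this pool - nothing here takes a lock. */
		rc = odp_pkt_queue_append(pool, queue, pkt);
		if (rc != 0)
			return; /* -1 no blks, -2 bad queue, -3 bad pkt */

		/* Returns 1 and fills out_pkt on success, 0 if empty. */
		rc = odp_pkt_queue_remove(pool, queue, &out_pkt);
		if (rc == 1)
			odp_packet_free(out_pkt);

		odp_pkt_queue_stats_print(pool);
		odp_queue_pool_destroy(pool);
	}

Since each queue_blk_t packs seven pkt handles plus its two link indices
into one 64-byte cacheline, a queue with a short backlog costs a single
cacheline, which is what makes the "millions of mostly-idle queues"
assumption above affordable.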