From patchwork Thu May 28 11:41:41 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 49060
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Thu, 28 May 2015 06:41:41 -0500
Message-Id: <1432813301-21973-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.0
Subject: [lng-odp] [PATCH] linux-generic: pool: group and document pool
 statistics
Address bug https://bugs.linaro.org/show_bug.cgi?id=1480

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 platform/linux-generic/include/odp_pool_internal.h | 44 +++++++++++--------
 platform/linux-generic/odp_pool.c                  | 50 +++++++++++-----------
 2 files changed, 52 insertions(+), 42 deletions(-)

diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h
index 247a75a..136db2c 100644
--- a/platform/linux-generic/include/odp_pool_internal.h
+++ b/platform/linux-generic/include/odp_pool_internal.h
@@ -73,6 +73,20 @@ typedef struct local_cache_t {
 #define POOL_LOCK_INIT(a) odp_spinlock_init(a)
 #endif
 
+/**
+ * ODP Pool stats - Maintain some useful stats regarding pool utilization
+ */
+typedef struct {
+	odp_atomic_u64_t bufallocs;     /**< Count of successful buf allocs */
+	odp_atomic_u64_t buffrees;      /**< Count of successful buf frees */
+	odp_atomic_u64_t blkallocs;     /**< Count of successful blk allocs */
+	odp_atomic_u64_t blkfrees;      /**< Count of successful blk frees */
+	odp_atomic_u64_t bufempty;      /**< Count of unsuccessful buf allocs */
+	odp_atomic_u64_t blkempty;      /**< Count of unsuccessful blk allocs */
+	odp_atomic_u64_t high_wm_count; /**< Count of high wm conditions */
+	odp_atomic_u64_t low_wm_count;  /**< Count of low wm conditions */
+} _odp_pool_stats_t;
+
 struct pool_entry_s {
 #ifdef POOL_USE_TICKETLOCK
 	odp_ticketlock_t lock ODP_ALIGNED_CACHE;
@@ -111,14 +125,7 @@ struct pool_entry_s {
 	void *blk_freelist;
 	odp_atomic_u32_t bufcount;
 	odp_atomic_u32_t blkcount;
-	odp_atomic_u64_t bufallocs;
-	odp_atomic_u64_t buffrees;
-	odp_atomic_u64_t blkallocs;
-	odp_atomic_u64_t blkfrees;
-	odp_atomic_u64_t bufempty;
-	odp_atomic_u64_t blkempty;
-	odp_atomic_u64_t high_wm_count;
-	odp_atomic_u64_t low_wm_count;
+	_odp_pool_stats_t poolstats;
 	uint32_t buf_num;
 	uint32_t seg_size;
 	uint32_t blk_size;
@@ -153,12 +160,12 @@ static inline void *get_blk(struct pool_entry_s *pool)
 
 	if (odp_unlikely(myhead == NULL)) {
 		POOL_UNLOCK(&pool->blk_lock);
-		odp_atomic_inc_u64(&pool->blkempty);
+		odp_atomic_inc_u64(&pool->poolstats.blkempty);
 	} else {
 		pool->blk_freelist = ((odp_buf_blk_t *)myhead)->next;
 		POOL_UNLOCK(&pool->blk_lock);
 		odp_atomic_dec_u32(&pool->blkcount);
-		odp_atomic_inc_u64(&pool->blkallocs);
+		odp_atomic_inc_u64(&pool->poolstats.blkallocs);
 	}
 
 	return myhead;
@@ -174,7 +181,7 @@ static inline void ret_blk(struct pool_entry_s *pool, void *block)
 	POOL_UNLOCK(&pool->blk_lock);
 
 	odp_atomic_inc_u32(&pool->blkcount);
-	odp_atomic_inc_u64(&pool->blkfrees);
+	odp_atomic_inc_u64(&pool->poolstats.blkfrees);
 }
 
 static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
@@ -186,7 +193,7 @@ static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
 
 	if (odp_unlikely(myhead == NULL)) {
 		POOL_UNLOCK(&pool->buf_lock);
-		odp_atomic_inc_u64(&pool->bufempty);
+		odp_atomic_inc_u64(&pool->poolstats.bufempty);
 	} else {
 		pool->buf_freelist = myhead->next;
 		POOL_UNLOCK(&pool->buf_lock);
@@ -196,10 +203,10 @@ static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
 		/* Check for low watermark condition */
 		if (bufcount == pool->low_wm && !pool->low_wm_assert) {
 			pool->low_wm_assert = 1;
-			odp_atomic_inc_u64(&pool->low_wm_count);
+			odp_atomic_inc_u64(&pool->poolstats.low_wm_count);
 		}
 
-		odp_atomic_inc_u64(&pool->bufallocs);
+		odp_atomic_inc_u64(&pool->poolstats.bufallocs);
 		myhead->allocator = odp_thread_id();
 	}
 
@@ -229,10 +236,10 @@ static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
 	/* Check if low watermark condition should be deasserted */
 	if (bufcount == pool->high_wm && pool->low_wm_assert) {
 		pool->low_wm_assert = 0;
-		odp_atomic_inc_u64(&pool->high_wm_count);
+		odp_atomic_inc_u64(&pool->poolstats.high_wm_count);
 	}
 
-	odp_atomic_inc_u64(&pool->buffrees);
+	odp_atomic_inc_u64(&pool->poolstats.buffrees);
 }
 
 static inline void *get_local_buf(local_cache_t *buf_cache,
@@ -291,8 +298,9 @@ static inline void flush_cache(local_cache_t *buf_cache,
 		flush_count++;
 	}
 
-	odp_atomic_add_u64(&pool->bufallocs, buf_cache->bufallocs);
-	odp_atomic_add_u64(&pool->buffrees, buf_cache->buffrees - flush_count);
+	odp_atomic_add_u64(&pool->poolstats.bufallocs, buf_cache->bufallocs);
+	odp_atomic_add_u64(&pool->poolstats.buffrees,
+			   buf_cache->buffrees - flush_count);
 
 	buf_cache->buf_freelist = NULL;
 	buf_cache->bufallocs = 0;
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c
index cd2c449..f2bf0c7 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -89,14 +89,14 @@ int odp_pool_init_global(void)
 		odp_atomic_init_u32(&pool->s.blkcount, 0);
 
 		/* Initialize pool statistics counters */
-		odp_atomic_init_u64(&pool->s.bufallocs, 0);
-		odp_atomic_init_u64(&pool->s.buffrees, 0);
-		odp_atomic_init_u64(&pool->s.blkallocs, 0);
-		odp_atomic_init_u64(&pool->s.blkfrees, 0);
-		odp_atomic_init_u64(&pool->s.bufempty, 0);
-		odp_atomic_init_u64(&pool->s.blkempty, 0);
-		odp_atomic_init_u64(&pool->s.high_wm_count, 0);
-		odp_atomic_init_u64(&pool->s.low_wm_count, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.bufallocs, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.buffrees, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.blkallocs, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.blkfrees, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.bufempty, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.blkempty, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.high_wm_count, 0);
+		odp_atomic_init_u64(&pool->s.poolstats.low_wm_count, 0);
 	}
 
 	ODP_DBG("\nPool init global\n");
@@ -401,14 +401,14 @@ odp_pool_t odp_pool_create(const char *name,
 	} while (blk >= block_base_addr);
 
 	/* Initialize pool statistics counters */
-	odp_atomic_store_u64(&pool->s.bufallocs, 0);
-	odp_atomic_store_u64(&pool->s.buffrees, 0);
-	odp_atomic_store_u64(&pool->s.blkallocs, 0);
-	odp_atomic_store_u64(&pool->s.blkfrees, 0);
-	odp_atomic_store_u64(&pool->s.bufempty, 0);
-	odp_atomic_store_u64(&pool->s.blkempty, 0);
-	odp_atomic_store_u64(&pool->s.high_wm_count, 0);
-	odp_atomic_store_u64(&pool->s.low_wm_count, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.bufallocs, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.buffrees, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.blkallocs, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.blkfrees, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.bufempty, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.blkempty, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.high_wm_count, 0);
+	odp_atomic_store_u64(&pool->s.poolstats.low_wm_count, 0);
 
 	/* Reset other pool globals to initial state */
 	pool->s.low_wm_assert = 0;
@@ -586,14 +586,16 @@ void odp_pool_print(odp_pool_t pool_hdl)
 	uint32_t bufcount  = odp_atomic_load_u32(&pool->s.bufcount);
 	uint32_t blkcount  = odp_atomic_load_u32(&pool->s.blkcount);
 
-	uint64_t bufallocs = odp_atomic_load_u64(&pool->s.bufallocs);
-	uint64_t buffrees  = odp_atomic_load_u64(&pool->s.buffrees);
-	uint64_t blkallocs = odp_atomic_load_u64(&pool->s.blkallocs);
-	uint64_t blkfrees  = odp_atomic_load_u64(&pool->s.blkfrees);
-	uint64_t bufempty  = odp_atomic_load_u64(&pool->s.bufempty);
-	uint64_t blkempty  = odp_atomic_load_u64(&pool->s.blkempty);
-	uint64_t hiwmct    = odp_atomic_load_u64(&pool->s.high_wm_count);
-	uint64_t lowmct    = odp_atomic_load_u64(&pool->s.low_wm_count);
+	uint64_t bufallocs = odp_atomic_load_u64(&pool->s.poolstats.bufallocs);
+	uint64_t buffrees  = odp_atomic_load_u64(&pool->s.poolstats.buffrees);
+	uint64_t blkallocs = odp_atomic_load_u64(&pool->s.poolstats.blkallocs);
+	uint64_t blkfrees  = odp_atomic_load_u64(&pool->s.poolstats.blkfrees);
+	uint64_t bufempty  = odp_atomic_load_u64(&pool->s.poolstats.bufempty);
+	uint64_t blkempty  = odp_atomic_load_u64(&pool->s.poolstats.blkempty);
+	uint64_t hiwmct    =
+		odp_atomic_load_u64(&pool->s.poolstats.high_wm_count);
+	uint64_t lowmct    =
+		odp_atomic_load_u64(&pool->s.poolstats.low_wm_count);
 
 	ODP_DBG("Pool info\n");
 	ODP_DBG("---------\n");