From patchwork Thu Oct 22 21:18:37 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 55458
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Thu, 22 Oct 2015 16:18:37 -0500
Message-Id: <1445548717-9210-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [PATCH] linux-generic: pool: move local caches to pool

Resolve Bug https://bugs.linaro.org/show_bug.cgi?id=1851 by moving
local buffer caches to the pool itself. This enables odp_pool_destroy()
to properly flush all local caches as part of its processing.
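For readers outside the linux-generic internals, the following is a
minimal, self-contained sketch of the pattern the patch adopts; the
names here (pool_t, cache_t, MAX_THREADS, destroy_flush_all) are
invented for illustration and are not ODP symbols. The point is that
caches indexed by thread id inside the shared pool are reachable from
any thread, whereas a __thread array indexed by pool id (the old
scheme) is reachable only from its owning thread, so the destroying
thread could never flush the other threads' caches:

#include <stddef.h>

#define MAX_THREADS 32  /* stand-in for ODP_CONFIG_MAX_THREADS */

typedef struct buf_hdr {
        struct buf_hdr *next;
} buf_hdr_t;

typedef struct {
        buf_hdr_t *freelist;    /* one thread's cached buffers */
} cache_t;

typedef struct {
        buf_hdr_t *global_freelist;
        /* Per-thread caches live in the shared pool, so every
         * thread's cache is addressable by the destroying thread. */
        cache_t local_cache[MAX_THREADS];
} pool_t;

/* Push one thread's cached buffers back onto the global freelist. */
static void flush_cache(pool_t *pool, cache_t *cache)
{
        buf_hdr_t *buf;

        while ((buf = cache->freelist) != NULL) {
                cache->freelist = buf->next;
                buf->next = pool->global_freelist;
                pool->global_freelist = buf;
        }
}

/* Destroy-time sweep: with the caches inside the pool, one thread
 * can flush all of them; with __thread storage it could not. */
static void destroy_flush_all(pool_t *pool)
{
        int i;

        for (i = 0; i < MAX_THREADS; i++)
                flush_cache(pool, &pool->local_cache[i]);
}

The sketch omits the synchronization against concurrent alloc/free
that a real implementation needs during the sweep.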
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen
---
 platform/linux-generic/include/odp_internal.h      |  1 +
 platform/linux-generic/include/odp_pool_internal.h | 13 ++++++++---
 platform/linux-generic/odp_init.c                  |  5 +++++
 platform/linux-generic/odp_pool.c                  | 25 +++++++++++++++-------
 4 files changed, 33 insertions(+), 11 deletions(-)

diff --git a/platform/linux-generic/include/odp_internal.h b/platform/linux-generic/include/odp_internal.h
index 6f0050f..010b82f 100644
--- a/platform/linux-generic/include/odp_internal.h
+++ b/platform/linux-generic/include/odp_internal.h
@@ -52,6 +52,7 @@ int odp_shm_term_global(void);
 int odp_shm_init_local(void);
 
 int odp_pool_init_global(void);
+int odp_pool_init_local(void);
 int odp_pool_term_global(void);
 int odp_pool_term_local(void);
diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h
index 136db2c..bb70159 100644
--- a/platform/linux-generic/include/odp_pool_internal.h
+++ b/platform/linux-generic/include/odp_pool_internal.h
@@ -53,9 +53,14 @@ typedef struct _odp_buffer_pool_init_t {
 
 /* Local cache for buffer alloc/free acceleration */
 typedef struct local_cache_t {
-	odp_buffer_hdr_t *buf_freelist; /* The local cache */
-	uint64_t bufallocs;  /* Local buffer alloc count */
-	uint64_t buffrees;   /* Local buffer free count */
+	union {
+		struct {
+			odp_buffer_hdr_t *buf_freelist; /* The local cache */
+			uint64_t bufallocs;  /* Local buffer alloc count */
+			uint64_t buffrees;   /* Local buffer free count */
+		};
+		uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(uint64_t))];
+	};
 } local_cache_t;
 
 /* Use ticketlock instead of spinlock */
@@ -133,6 +138,8 @@ struct pool_entry_s {
 	uint32_t low_wm;
 	uint32_t headroom;
 	uint32_t tailroom;
+
+	local_cache_t local_cache[ODP_CONFIG_MAX_THREADS] ODP_ALIGNED_CACHE;
 };
 
 typedef union pool_entry_u {
diff --git a/platform/linux-generic/odp_init.c b/platform/linux-generic/odp_init.c
index 48d9b20..5e19d86 100644
--- a/platform/linux-generic/odp_init.c
+++ b/platform/linux-generic/odp_init.c
@@ -138,6 +138,11 @@ int odp_init_local(odp_thread_type_t thr_type)
 		return -1;
 	}
 
+	if (odp_pool_init_local()) {
+		ODP_ERR("ODP pool local init failed.\n");
+		return -1;
+	}
+
 	if (odp_schedule_init_local()) {
 		ODP_ERR("ODP schedule local init failed.\n");
 		return -1;
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c
index 30d4b2b..d06a9d4 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -57,8 +57,8 @@ static const char SHM_DEFAULT_NAME[] = "odp_buffer_pools";
 /* Pool entry pointers (for inlining) */
 void *pool_entry_ptr[ODP_CONFIG_POOLS];
 
-/* Local cache for buffer alloc/free acceleration */
-static __thread local_cache_t local_cache[ODP_CONFIG_POOLS];
+/* Cache thread id locally for local cache performance */
+static __thread int local_id;
 
 int odp_pool_init_global(void)
 {
@@ -107,6 +107,12 @@ int odp_pool_init_global(void)
 	return 0;
 }
 
+int odp_pool_init_local(void)
+{
+	local_id = odp_thread_id();
+	return 0;
+}
+
 int odp_pool_term_global(void)
 {
 	int i;
@@ -442,6 +448,7 @@ int odp_pool_destroy(odp_pool_t pool_hdl)
 {
 	uint32_t pool_id = pool_handle_to_index(pool_hdl);
 	pool_entry_t *pool = get_pool_entry(pool_id);
+	int i;
 
 	if (pool == NULL)
 		return -1;
@@ -455,8 +462,9 @@ int odp_pool_destroy(odp_pool_t pool_hdl)
 		return -1;
 	}
 
-	/* Make sure local cache is empty */
-	flush_cache(&local_cache[pool_id], &pool->s);
+	/* Make sure local caches are empty */
+	for (i = 0; i < ODP_CONFIG_MAX_THREADS; i++)
+		flush_cache(&pool->s.local_cache[i], &pool->s);
 
 	/* Call fails if pool has allocated buffers */
 	if (odp_atomic_load_u32(&pool->s.bufcount) < pool->s.buf_num) {
@@ -485,8 +493,9 @@ odp_buffer_t buffer_alloc(odp_pool_t pool_hdl, size_t size)
 		return ODP_BUFFER_INVALID;
 
 	/* Try to satisfy request from the local cache */
-	buf = (odp_anybuf_t *)(void *)get_local_buf(&local_cache[pool_id],
-						    &pool->s, totsize);
+	buf = (odp_anybuf_t *)
+		(void *)get_local_buf(&pool->s.local_cache[local_id],
+				      &pool->s, totsize);
 
 	/* If cache is empty, satisfy request from the pool */
 	if (odp_unlikely(buf == NULL)) {
@@ -537,7 +546,7 @@ void odp_buffer_free(odp_buffer_t buf)
 	if (odp_unlikely(pool->s.low_wm_assert))
 		ret_buf(&pool->s, buf_hdr);
 	else
-		ret_local_buf(&local_cache[pool->s.pool_id], buf_hdr);
+		ret_local_buf(&pool->s.local_cache[local_id], buf_hdr);
 }
 
 void _odp_flush_caches(void)
@@ -546,7 +555,7 @@ void _odp_flush_caches(void)
 	int i;
 
 	for (i = 0; i < ODP_CONFIG_POOLS; i++) {
 		pool_entry_t *pool = get_pool_entry(i);
-		flush_cache(&local_cache[i], &pool->s);
+		flush_cache(&pool->s.local_cache[local_id], &pool->s);
 	}
 }
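
Two supporting details of the patch are worth noting. First,
odp_pool_init_local() caches odp_thread_id() in the __thread variable
local_id, so the alloc/free fast paths can index
pool->s.local_cache[] without calling odp_thread_id() on every
operation. Second, each local_cache_t is padded out to a cache-line
multiple via the anonymous union, and the array is declared
ODP_ALIGNED_CACHE, so two threads' cache slots never share a cache
line and the hot path avoids false sharing. Below is a standalone
illustration of that padding idiom; the names (CACHE_LINE,
LINE_ROUNDUP, padded_cache_t) are invented for the example and stand
in for ODP's ODP_CACHE_LINE_SIZE_ROUNDUP machinery:

#include <stdint.h>

#define CACHE_LINE 64
#define LINE_ROUNDUP(x) \
        ((((x) + CACHE_LINE - 1) / CACHE_LINE) * CACHE_LINE)

typedef struct {
        union {
                struct {
                        void *freelist;
                        uint64_t allocs;
                        uint64_t frees;
                };
                /* Pad the slot to whole cache lines so adjacent
                 * per-thread slots never share a line. */
                uint8_t pad[LINE_ROUNDUP(sizeof(void *) +
                                         2 * sizeof(uint64_t))];
        };
} padded_cache_t;

_Static_assert(sizeof(padded_cache_t) % CACHE_LINE == 0,
               "per-thread slot must occupy whole cache lines");

The cost of this layout is static: every pool now carries
ODP_CONFIG_MAX_THREADS padded cache slots whether or not all threads
use it, which is the usual space-for-scalability trade of per-thread
caching.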