From patchwork Wed Feb 25 15:40:27 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 45082
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Wed, 25 Feb 2015 09:40:27 -0600
Message-Id: <1424878827-31541-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.0
Subject: [lng-odp] [PATCHv3] linux-generic: pools: switch to simple locks for buf/blk synchronization

Resolve the ABA issue with a simple use of locks. The performance hit is
negligible due to the existing use of local buffer caching.
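
For reference, the change replaces the tagged-pointer compare-and-swap
freelists with freelists protected by per-pool buf/blk locks. The pattern is
the usual lock-protected singly linked list; below is a minimal standalone
sketch of that pattern (illustrative only -- it uses pthreads and the
hypothetical names freelist_t/node_t rather than the pool's POOL_LOCK and
POOL_UNLOCK macros):

    #include <pthread.h>
    #include <stddef.h>

    typedef struct node_t {
            struct node_t *next;
    } node_t;

    typedef struct {
            node_t *head;           /* singly linked freelist */
            pthread_mutex_t lock;   /* stands in for POOL_LOCK; initialize
                                     * with PTHREAD_MUTEX_INITIALIZER or
                                     * pthread_mutex_init() */
    } freelist_t;

    /* Pop the head node. Holding the lock makes reading the head and
     * advancing it a single atomic step, so no ABA tagging is needed. */
    static node_t *freelist_get(freelist_t *fl)
    {
            pthread_mutex_lock(&fl->lock);
            node_t *n = fl->head;
            if (n != NULL)
                    fl->head = n->next;
            pthread_mutex_unlock(&fl->lock);
            return n;
    }

    /* Push a node back onto the freelist. */
    static void freelist_put(freelist_t *fl, node_t *n)
    {
            pthread_mutex_lock(&fl->lock);
            n->next = fl->head;
            fl->head = n;
            pthread_mutex_unlock(&fl->lock);
    }

In the lock-free version a thread could load head A, stall, and later have
its compare-and-swap succeed even though A had been popped and pushed back
with a different next pointer in the meantime; the odp_tag()/odp_detag()/
odp_retag() helpers removed below existed only to detect that ABA case.
Taking a lock closes the window outright, and because most allocations are
served from the thread-local cache the lock is taken relatively rarely.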
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 platform/linux-generic/include/odp_pool_internal.h | 95 +++++++---------------
 platform/linux-generic/odp_pool.c                  |  8 +-
 2 files changed, 35 insertions(+), 68 deletions(-)

diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h
index 1b7906f..feeb284 100644
--- a/platform/linux-generic/include/odp_pool_internal.h
+++ b/platform/linux-generic/include/odp_pool_internal.h
@@ -76,8 +76,12 @@ typedef struct local_cache_t {
 struct pool_entry_s {
 #ifdef POOL_USE_TICKETLOCK
         odp_ticketlock_t lock ODP_ALIGNED_CACHE;
+        odp_ticketlock_t buf_lock;
+        odp_ticketlock_t blk_lock;
 #else
         odp_spinlock_t lock ODP_ALIGNED_CACHE;
+        odp_spinlock_t buf_lock;
+        odp_spinlock_t blk_lock;
 #endif
 
         char name[ODP_POOL_NAME_LEN];
@@ -103,8 +107,8 @@ struct pool_entry_s {
         size_t pool_size;
         uint32_t buf_align;
         uint32_t buf_stride;
-        _odp_atomic_ptr_t buf_freelist;
-        _odp_atomic_ptr_t blk_freelist;
+        odp_buffer_hdr_t *buf_freelist;
+        void *blk_freelist;
         odp_atomic_u32_t bufcount;
         odp_atomic_u32_t blkcount;
         odp_atomic_u64_t bufallocs;
@@ -140,58 +144,33 @@ extern void *pool_entry_ptr[];
 #define pool_is_secure(pool) 0
 #endif
 
-#define TAG_ALIGN ((size_t)16)
-
-#define odp_cs(ptr, old, new) \
-        _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, \
-                                        _ODP_MEMMODEL_SC, \
-                                        _ODP_MEMMODEL_SC)
-
-/* Helper functions for pointer tagging to avoid ABA race conditions */
-#define odp_tag(ptr) \
-        (((size_t)ptr) & (TAG_ALIGN - 1))
-
-#define odp_detag(ptr) \
-        ((void *)(((size_t)ptr) & -TAG_ALIGN))
-
-#define odp_retag(ptr, tag) \
-        ((void *)(((size_t)ptr) | odp_tag(tag)))
-
-
 static inline void *get_blk(struct pool_entry_s *pool)
 {
-        void *oldhead, *myhead, *newhead;
-
-        oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ);
+        void *myhead;
+        POOL_LOCK(&pool->blk_lock);
 
-        do {
-                size_t tag = odp_tag(oldhead);
-                myhead = odp_detag(oldhead);
-                if (odp_unlikely(myhead == NULL))
-                        break;
-                newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + 1);
-        } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0);
+        myhead = pool->blk_freelist;
 
-        if (odp_unlikely(myhead == NULL))
+        if (odp_unlikely(myhead == NULL)) {
+                POOL_UNLOCK(&pool->blk_lock);
                 odp_atomic_inc_u64(&pool->blkempty);
-        else
+        } else {
+                pool->blk_freelist = ((odp_buf_blk_t *)myhead)->next;
+                POOL_UNLOCK(&pool->blk_lock);
                 odp_atomic_dec_u32(&pool->blkcount);
+        }
 
-        return (void *)myhead;
+        return myhead;
 }
 
 static inline void ret_blk(struct pool_entry_s *pool, void *block)
 {
-        void *oldhead, *myhead, *myblock;
+        POOL_LOCK(&pool->blk_lock);
 
-        oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ);
+        ((odp_buf_blk_t *)block)->next = pool->blk_freelist;
+        pool->blk_freelist = block;
 
-        do {
-                size_t tag = odp_tag(oldhead);
-                myhead = odp_detag(oldhead);
-                ((odp_buf_blk_t *)block)->next = myhead;
-                myblock = odp_retag(block, tag + 1);
-        } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0);
+        POOL_UNLOCK(&pool->blk_lock);
 
         odp_atomic_inc_u32(&pool->blkcount);
         odp_atomic_inc_u64(&pool->blkfrees);
@@ -199,21 +178,17 @@ static inline void ret_blk(struct pool_entry_s *pool, void *block)
 
 static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
 {
-        odp_buffer_hdr_t *oldhead, *myhead, *newhead;
+        odp_buffer_hdr_t *myhead;
+        POOL_LOCK(&pool->buf_lock);
 
-        oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ);
-
-        do {
-                size_t tag = odp_tag(oldhead);
-                myhead = odp_detag(oldhead);
-                if (odp_unlikely(myhead == NULL))
-                        break;
-                newhead = odp_retag(myhead->next, tag + 1);
-        } while (odp_cs(pool->buf_freelist, oldhead, newhead) == 0);
+        myhead = pool->buf_freelist;
 
         if (odp_unlikely(myhead == NULL)) {
+                POOL_UNLOCK(&pool->buf_lock);
                 odp_atomic_inc_u64(&pool->bufempty);
         } else {
+                pool->buf_freelist = myhead->next;
+                POOL_UNLOCK(&pool->buf_lock);
                 uint64_t bufcount =
                         odp_atomic_fetch_sub_u32(&pool->bufcount, 1) - 1;
 
@@ -224,7 +199,6 @@ static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
                 }
 
                 odp_atomic_inc_u64(&pool->bufallocs);
-                myhead->next = myhead; /* Mark buffer allocated */
                 myhead->allocator = odp_thread_id();
         }
 
@@ -233,10 +207,6 @@ static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
 
 static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
 {
-        odp_buffer_hdr_t *oldhead, *myhead, *mybuf;
-
-        buf->allocator = ODP_FREEBUF; /* Mark buffer free */
-
         if (!buf->flags.hdrdata && buf->type != ODP_EVENT_BUFFER) {
                 while (buf->segcount > 0) {
                         if (buffer_is_secure(buf) || pool_is_secure(pool))
@@ -247,14 +217,11 @@ static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
                 buf->size = 0;
         }
 
-        oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ);
-
-        do {
-                size_t tag = odp_tag(oldhead);
-                myhead = odp_detag(oldhead);
-                buf->next = myhead;
-                mybuf = odp_retag(buf, tag + 1);
-        } while (odp_cs(pool->buf_freelist, oldhead, mybuf) == 0);
+        buf->allocator = ODP_FREEBUF; /* Mark buffer free */
+        POOL_LOCK(&pool->buf_lock);
+        buf->next = pool->buf_freelist;
+        pool->buf_freelist = buf;
+        POOL_UNLOCK(&pool->buf_lock);
 
         uint64_t bufcount = odp_atomic_fetch_add_u32(&pool->bufcount, 1) + 1;
 
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c
index ef7d7ec..cbe3fcb 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -83,6 +83,8 @@ int odp_pool_init_global(void)
                 /* init locks */
                 pool_entry_t *pool = &pool_tbl->pool[i];
                 POOL_LOCK_INIT(&pool->s.lock);
+                POOL_LOCK_INIT(&pool->s.buf_lock);
+                POOL_LOCK_INIT(&pool->s.blk_lock);
                 pool->s.pool_hdl = pool_index_to_handle(i);
                 pool->s.pool_id = i;
                 pool_entry_ptr[i] = pool;
@@ -336,10 +338,8 @@ odp_pool_t odp_pool_create(const char *name,
         pool->s.pool_mdata_addr = mdata_base_addr;
         pool->s.buf_stride = buf_stride;
 
-        _odp_atomic_ptr_store(&pool->s.buf_freelist, NULL,
-                              _ODP_MEMMODEL_RLX);
-        _odp_atomic_ptr_store(&pool->s.blk_freelist, NULL,
-                              _ODP_MEMMODEL_RLX);
+        pool->s.buf_freelist = NULL;
+        pool->s.blk_freelist = NULL;
 
         /* Initialization will increment these to their target vals */
         odp_atomic_store_u32(&pool->s.bufcount, 0);
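
A note on the locking macros: POOL_LOCK(), POOL_UNLOCK() and POOL_LOCK_INIT()
are used by the new code, but their definitions are not part of the quoted
hunks. Presumably they dispatch on the same POOL_USE_TICKETLOCK switch that
selects the lock type in struct pool_entry_s, along these lines (assumed
mapping, not taken from this patch):

    /* Sketch only -- the real macros live elsewhere in the linux-generic
     * pool sources and may differ in detail. */
    #ifdef POOL_USE_TICKETLOCK
    #define POOL_LOCK(a)      odp_ticketlock_lock(a)
    #define POOL_UNLOCK(a)    odp_ticketlock_unlock(a)
    #define POOL_LOCK_INIT(a) odp_ticketlock_init(a)
    #else
    #define POOL_LOCK(a)      odp_spinlock_lock(a)
    #define POOL_UNLOCK(a)    odp_spinlock_unlock(a)
    #define POOL_LOCK_INIT(a) odp_spinlock_init(a)
    #endif

Ticketlocks give FIFO fairness under contention, while spinlocks are slightly
lighter weight; the build-time POOL_USE_TICKETLOCK switch leaves that choice
to the platform.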