From patchwork Tue Nov 10 04:20:09 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 56281
From: Bill Fischofer
To: lng-odp@lists.linaro.org
Date: Mon, 9 Nov 2015 20:20:09 -0800
Message-Id: <1447129211-9095-7-git-send-email-bill.fischofer@linaro.org>
In-Reply-To: <1447129211-9095-1-git-send-email-bill.fischofer@linaro.org>
References: <1447129211-9095-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv3 6/8] linux-generic: queue: streamline and correct release_order() routine

Resolve the corner case of releasing an order that still has events on
the reorder queue. This also allows the reorder_complete() routine to
be streamlined.

This patch resolves Bug 1879: https://bugs.linaro.org/show_bug.cgi?id=1879

Signed-off-by: Bill Fischofer
---
 .../linux-generic/include/odp_queue_internal.h |  5 +-
 platform/linux-generic/odp_queue.c             | 57 +++++++++++++++++-----
 2 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h
index 6120740..a70044b 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -335,8 +335,7 @@ static inline int reorder_deq(queue_entry_t *queue,
 static inline void reorder_complete(queue_entry_t *origin_qe,
 				    odp_buffer_hdr_t **reorder_buf_return,
 				    odp_buffer_hdr_t **placeholder_buf,
-				    int placeholder_append,
-				    int order_released)
+				    int placeholder_append)
 {
 	odp_buffer_hdr_t *reorder_buf = origin_qe->s.reorder_head;
 	odp_buffer_hdr_t *next_buf;
@@ -356,7 +355,7 @@ static inline void reorder_complete(queue_entry_t *origin_qe,
 			reorder_buf = next_buf;
 			order_release(origin_qe, 1);
-		} else if (!order_released && reorder_buf->flags.sustain) {
+		} else if (reorder_buf->flags.sustain) {
 			reorder_buf = next_buf;
 		} else {
 			*reorder_buf_return = origin_qe->s.reorder_head;
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 9cab9b2..a5e60d7 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -39,6 +39,11 @@
 #include 
 
+#define RESOLVE_ORDER 0
+#define SUSTAIN_ORDER 1
+
+#define NOAPPEND 0
+#define APPEND 1
 
 typedef struct queue_table_t {
 	queue_entry_t queue[ODP_CONFIG_QUEUES];
@@ -521,8 +526,7 @@ int ordered_queue_enq(queue_entry_t *queue,
 	if (sched && schedule_queue(queue))
 		ODP_ABORT("schedule_queue failed\n");
 
-	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf,
-			 1, 0);
+	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, APPEND);
 
 	UNLOCK(&origin_qe->s.lock);
 
 	if (reorder_buf)
@@ -606,7 +610,8 @@ int odp_queue_enq_multi(odp_queue_t handle, const odp_event_t ev[], int num)
 	for (i = 0; i < num; i++)
 		buf_hdr[i] = odp_buf_to_hdr(odp_buffer_from_event(ev[i]));
 
-	return num == 0 ? 0 : queue->s.enqueue_multi(queue, buf_hdr, num, 1);
+	return num == 0 ? 0 : queue->s.enqueue_multi(queue, buf_hdr,
+						     num, SUSTAIN_ORDER);
 }
 
 int odp_queue_enq(odp_queue_t handle, odp_event_t ev)
@@ -620,7 +625,7 @@ int odp_queue_enq(odp_queue_t handle, odp_event_t ev)
 	/* No chains via this entry */
 	buf_hdr->link = NULL;
 
-	return queue->s.enqueue(queue, buf_hdr, 1);
+	return queue->s.enqueue(queue, buf_hdr, SUSTAIN_ORDER);
 }
 
 int queue_enq_internal(odp_buffer_hdr_t *buf_hdr)
@@ -660,7 +665,7 @@ odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)
 			buf_hdr->sync[i] =
 				odp_atomic_fetch_inc_u64(&queue->s.sync_in[i]);
 		}
-		buf_hdr->flags.sustain = 0;
+		buf_hdr->flags.sustain = SUSTAIN_ORDER;
 	} else {
 		buf_hdr->origin_qe = NULL;
 	}
@@ -713,7 +718,7 @@ int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 				odp_atomic_fetch_inc_u64
 					(&queue->s.sync_in[j]);
 			}
-			buf_hdr[i]->flags.sustain = 0;
+			buf_hdr[i]->flags.sustain = SUSTAIN_ORDER;
 		} else {
 			buf_hdr[i]->origin_qe = NULL;
 		}
@@ -879,7 +884,7 @@ int queue_pktout_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr,
 	order_release(origin_qe, release_count + placeholder_count);
 
 	/* Now handle sends to other queues that are ready to go */
-	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, 1, 0);
+	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, APPEND);
 
 	/* We're fully done with the origin_qe at last */
 	UNLOCK(&origin_qe->s.lock);
@@ -947,13 +952,43 @@ int release_order(queue_entry_t *origin_qe, uint64_t order,
 	odp_buffer_t placeholder_buf;
 	odp_buffer_hdr_t *placeholder_buf_hdr, *reorder_buf, *next_buf;
 
-	/* Must tlock the origin queue to process the release */
+	/* Must lock the origin queue to process the release */
 	LOCK(&origin_qe->s.lock);
 
-	/* If we are in the order we can release immediately since there can
-	 * be no confusion about intermediate elements
+	/* If we are in order we can release immediately since there can be no
+	 * confusion about intermediate elements
 	 */
 	if (order <= origin_qe->s.order_out) {
+		reorder_buf = origin_qe->s.reorder_head;
+
+		/* We're in order, however there may be one or more events on
+		 * the reorder queue that are part of this order. If that is
+		 * the case, remove them and let ordered_queue_enq() handle
+		 * them and resolve the order for us.
+		 */
+		if (reorder_buf && reorder_buf->order == order) {
+			odp_buffer_hdr_t *reorder_head = reorder_buf;
+
+			next_buf = reorder_buf->next;
+
+			while (next_buf && next_buf->order == order) {
+				reorder_buf = next_buf;
+				next_buf = next_buf->next;
+			}
+
+			origin_qe->s.reorder_head = reorder_buf->next;
+			reorder_buf->next = NULL;
+
+			UNLOCK(&origin_qe->s.lock);
+			reorder_head->link = reorder_buf->next;
+			return ordered_queue_enq(reorder_head->target_qe,
+						 reorder_head, RESOLVE_ORDER,
+						 origin_qe, order);
+		}
+
+		/* Reorder queue has no elements for this order, so it's safe
+		 * to resolve order here
+		 */
 		order_release(origin_qe, 1);
 
 		/* Check if this release allows us to unblock waiters. At the
@@ -965,7 +1000,7 @@ int release_order(queue_entry_t *origin_qe, uint64_t order,
 	 * element(s) on the reorder queue
 	 */
 	reorder_complete(origin_qe, &reorder_buf,
-			 &placeholder_buf_hdr, 0, 1);
+			 &placeholder_buf_hdr, NOAPPEND);
 
 	/* Now safe to unlock */
 	UNLOCK(&origin_qe->s.lock);