From patchwork Sat Aug 8 03:03:11 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 52068
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Fri, 7 Aug 2015 22:03:11 -0500
Message-Id: <1439002992-29285-14-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1439002992-29285-1-git-send-email-bill.fischofer@linaro.org>
References: <1439002992-29285-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv10 13/14] linux-generic: schedule: implement odp_schedule_order_lock/unlock
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 .../include/odp/plat/schedule_types.h            |  2 -
 .../linux-generic/include/odp_buffer_internal.h  |  5 ++-
 .../linux-generic/include/odp_queue_internal.h   |  2 +
 platform/linux-generic/odp_queue.c               | 48 ++++++++++++++++++++--
 4 files changed, 51 insertions(+), 6 deletions(-)

diff --git a/platform/linux-generic/include/odp/plat/schedule_types.h b/platform/linux-generic/include/odp/plat/schedule_types.h
index f13bfab..3665fec 100644
--- a/platform/linux-generic/include/odp/plat/schedule_types.h
+++ b/platform/linux-generic/include/odp/plat/schedule_types.h
@@ -52,8 +52,6 @@ typedef int odp_schedule_group_t;
 
 #define ODP_SCHED_GROUP_NAME_LEN 32
 
-typedef int odp_schedule_olock_t;
-
 /**
  * @}
  */
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h
index c9b8409..ddd2642 100644
--- a/platform/linux-generic/include/odp_buffer_internal.h
+++ b/platform/linux-generic/include/odp_buffer_internal.h
@@ -140,7 +140,10 @@ typedef struct odp_buffer_hdr_t {
 	void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs */
 	uint64_t order;                 /* sequence for ordered queues */
 	queue_entry_t *origin_qe;       /* ordered queue origin */
-	queue_entry_t *target_qe;       /* ordered queue target */
+	union {
+		queue_entry_t *target_qe; /* ordered queue target */
+		uint64_t sync;            /* for ordered synchronization */
+	};
 } odp_buffer_hdr_t;
 
 /** @internal Compile time assert that the
diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h
index 9cca552..aa36df5 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -81,6 +81,8 @@ struct queue_entry_s {
 	uint64_t order_out;
 	odp_buffer_hdr_t *reorder_head;
 	odp_buffer_hdr_t *reorder_tail;
+	odp_atomic_u64_t sync_in;
+	odp_atomic_u64_t sync_out;
 };
 
 typedef union queue_entry_u {
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index ec1f797..2d999aa 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef USE_TICKETLOCK
 #include
@@ -122,6 +123,8 @@ int odp_queue_init_global(void)
 		/* init locks */
 		queue_entry_t *queue = get_qentry(i);
 		LOCK_INIT(&queue->s.lock);
+		odp_atomic_init_u64(&queue->s.sync_in, 0);
+		odp_atomic_init_u64(&queue->s.sync_out, 0);
 		queue->s.handle = queue_from_id(i);
 	}
 
@@ -404,8 +407,10 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
 	}
 
 	/* We're in order, so account for this and proceed with enq */
-	if (!buf_hdr->flags.sustain)
+	if (!buf_hdr->flags.sustain) {
 		origin_qe->s.order_out++;
+		odp_atomic_fetch_inc_u64(&origin_qe->s.sync_out);
+	}
 
 	/* if this element is linked, restore the linked chain */
 	buf_tail = buf_hdr->link;
@@ -524,6 +529,8 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
 
 	/* Reflect the above two in the output sequence */
 	origin_qe->s.order_out += release_count + placeholder_count;
+	odp_atomic_fetch_add_u64(&origin_qe->s.sync_out,
+				 release_count + placeholder_count);
 
 	/* Now handle any unblocked buffers destined for other queues */
 	UNLOCK(&queue->s.lock);
@@ -689,7 +696,8 @@ odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)
 	buf_hdr->next = NULL;
 	if (queue->s.param.sched.sync == ODP_SCHED_SYNC_ORDERED) {
 		buf_hdr->origin_qe = queue;
-		buf_hdr->order = queue->s.order_in++;
+		buf_hdr->order = queue->s.order_in++;
+		buf_hdr->sync = odp_atomic_fetch_inc_u64(&queue->s.sync_in);
 		buf_hdr->flags.sustain = 0;
 	} else {
 		buf_hdr->origin_qe = NULL;
@@ -737,6 +745,8 @@ int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 		if (queue->s.param.sched.sync == ODP_SCHED_SYNC_ORDERED) {
 			buf_hdr[i]->origin_qe = queue;
 			buf_hdr[i]->order = queue->s.order_in++;
+			buf_hdr[i]->sync =
+				odp_atomic_fetch_inc_u64(&queue->s.sync_in);
 			buf_hdr[i]->flags.sustain = 0;
 		} else {
 			buf_hdr[i]->origin_qe = NULL;
@@ -829,8 +839,10 @@ int odp_schedule_release_ordered(odp_event_t ev)
 	 */
 	if (buf_hdr->order <= origin_qe->s.order_out + 1) {
 		buf_hdr->origin_qe = NULL;
-		if (!buf_hdr->flags.sustain)
+		if (!buf_hdr->flags.sustain) {
 			origin_qe->s.order_out++;
+			odp_atomic_fetch_inc_u64(&origin_qe->s.sync_out);
+		}
 
 		/* check if this release allows us to unblock waiters */
 		reorder_buf = origin_qe->s.reorder_head;
@@ -911,8 +923,38 @@ int odp_schedule_order_copy(odp_event_t src_event, odp_event_t dst_event)
 
 	dst->origin_qe = origin_qe;
 	dst->order = src->order;
+	dst->sync = src->sync;
 	src->flags.sustain = 1;
 
 	UNLOCK(&origin_qe->s.lock);
 	return 0;
 }
+
+void odp_schedule_order_lock(odp_event_t ev)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(odp_buffer_from_event(ev));
+	queue_entry_t *origin_qe = buf_hdr->origin_qe;
+
+	/* Wait until we are in order. Note that sync_out will be incremented
+	 * both by unlocks as well as order resolution, so we're OK if only
+	 * some events in the ordered flow need to lock.
+	 */
+	while (buf_hdr->sync > odp_atomic_load_u64(&origin_qe->s.sync_out))
+		odp_spin();
+}
+
+void odp_schedule_order_unlock(odp_event_t ev)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(odp_buffer_from_event(ev));
+	queue_entry_t *origin_qe = buf_hdr->origin_qe;
+
+	/* Get a new sync order for reusability, and release the lock. Note
+	 * that this must be done in this sequence to prevent race conditions
+	 * where the next waiter could lock and unlock before we're able to
+	 * get a new sync order since that would cause order inversion on
+	 * subsequent locks we may perform on this event in this ordered
+	 * context.
+	 */
+	buf_hdr->sync = odp_atomic_fetch_inc_u64(&origin_qe->s.sync_in);
+	odp_atomic_fetch_inc_u64(&origin_qe->s.sync_out);
+}
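
For context, the sketch below shows how application code might use the two new
calls; it is not part of the patch. The worker_loop() wrapper, the dst_queue
handle, and the shared_counter variable are illustrative assumptions only;
what this patch actually defines is odp_schedule_order_lock() and
odp_schedule_order_unlock(), each taking the odp_event_t whose ordered
context is being synchronized.

/* Illustrative usage sketch (not part of this patch). Assumes the
 * scheduled source queue was created with sched.sync set to
 * ODP_SCHED_SYNC_ORDERED and that dst_queue was created elsewhere.
 */
#include <odp.h>

static uint64_t shared_counter; /* state serialized by the ordered lock */

static void worker_loop(odp_queue_t dst_queue)
{
	for (;;) {
		odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

		/* ...parallel, possibly out-of-order processing... */

		/* Spins until every earlier event in this ordered context
		 * has resolved its order or passed through an unlock, then
		 * runs the critical section in arrival order.
		 */
		odp_schedule_order_lock(ev);
		shared_counter++;
		odp_schedule_order_unlock(ev);

		/* Downstream enqueue; ordering is restored by queue_enq() */
		odp_queue_enq(dst_queue, ev);
	}
}

Note that because odp_schedule_order_unlock() fetches a fresh sync_in ticket
before bumping sync_out, the same event can lock again later in the same
ordered context without inverting order with respect to the next waiter.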