From patchwork Fri Oct 30 20:33:05 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 55852
From: Bill Fischofer
To: lng-odp@lists.linaro.org, carl.wallen@nokia.com
Date: Fri, 30 Oct 2015 15:33:05 -0500
Message-Id: <1446237185-13264-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [API-NEXT PATCHv2] linux-generic: queue: yield trying to obtain multiple locks

To avoid deadlock, especially on a single core, force an explicit yield
while not holding either lock when attempting to acquire multiple locks
for ordered queue processing. Also handle enqueues to self, since in that
case the origin and target queues share a single lock. This addresses the
aspect of Bug https://bugs.linaro.org/show_bug.cgi?id=1879 relating to
deadlock on single-core systems.

Signed-off-by: Bill Fischofer
---
 platform/linux-generic/odp_queue.c | 47 ++++++++++++++++++++++++++++++--------
 1 file changed, 38 insertions(+), 9 deletions(-)

diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index a27af0b..4366683 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -48,6 +48,36 @@ typedef struct queue_table_t {
 
 static queue_table_t *queue_tbl;
 
+static inline void get_qe_locks(queue_entry_t *qe1, queue_entry_t *qe2)
+{
+	int i;
+
+	/* Special case: enq to self */
+	if (qe1 == qe2) {
+		LOCK(&qe1->s.lock);
+		return;
+	}
+
+	/* Enq to another queue. Issue is that since any queue can be either
+	 * origin or target we can't have a static lock hierarchy. Strategy is
+	 * to get one lock then attempt to get the other. If the second lock
+	 * attempt fails, release the first and try again. Note that in single
+	 * CPU mode we require the explicit yield since otherwise we may never
+	 * resolve unless the scheduler happens to timeslice exactly when we
+	 * hold no lock.
+	 */
+	while (1) {
+		for (i = 0; i < 10; i++) {
+			LOCK(&qe1->s.lock);
+			if (LOCK_TRY(&qe2->s.lock))
+				return;
+			UNLOCK(&qe1->s.lock);
+			odp_sync_stores();
+		}
+		sched_yield();
+	}
+}
+
 queue_entry_t *get_qentry(uint32_t queue_id)
 {
 	return &queue_tbl->queue[queue_id];
@@ -370,14 +400,11 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 
 	/* Need two locks for enq operations from ordered queues */
 	if (origin_qe) {
-		LOCK(&origin_qe->s.lock);
-		while (!LOCK_TRY(&queue->s.lock)) {
-			UNLOCK(&origin_qe->s.lock);
-			LOCK(&origin_qe->s.lock);
-		}
+		get_qe_locks(origin_qe, queue);
 		if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) {
 			UNLOCK(&queue->s.lock);
-			UNLOCK(&origin_qe->s.lock);
+			if (origin_qe != queue)
+				UNLOCK(&origin_qe->s.lock);
 			ODP_ERR("Bad origin queue status\n");
 			ODP_ERR("queue = %s, origin q = %s, buf = %p\n",
 				queue->s.name, origin_qe->s.name, buf_hdr);
@@ -389,7 +416,7 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 
 	if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
 		UNLOCK(&queue->s.lock);
-		if (origin_qe)
+		if (origin_qe && origin_qe != queue)
 			UNLOCK(&origin_qe->s.lock);
 		ODP_ERR("Bad queue status\n");
 		return -1;
@@ -405,7 +432,8 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 	 * we're done here.
 	 */
 	UNLOCK(&queue->s.lock);
-	UNLOCK(&origin_qe->s.lock);
+	if (origin_qe != queue)
+		UNLOCK(&origin_qe->s.lock);
 	return 0;
 }
 
@@ -477,7 +505,8 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 	/* Now handle any unblocked complete buffers destined for
 	 * other queues, appending placeholder bufs as needed.
 	 */
-	UNLOCK(&queue->s.lock);
+	if (origin_qe != queue)
+		UNLOCK(&queue->s.lock);
 	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, 1, 0);
 	UNLOCK(&origin_qe->s.lock);
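
For reviewers who want to reason about the locking pattern outside the ODP
tree, here is a minimal standalone sketch of the same idea recast with plain
pthread spinlocks. The type and helper names (qe_t, put_qe_locks) and the
retry count of 10 are illustrative only and not part of the ODP API; the real
code uses the LOCK/LOCK_TRY/UNLOCK macros and odp_sync_stores() shown in the
patch.

#include <pthread.h>
#include <sched.h>

/* Illustrative stand-in for queue_entry_t; only the lock matters here. */
typedef struct {
	pthread_spinlock_t lock;
} qe_t;

/*
 * Acquire both entries' locks without a static lock hierarchy: take the
 * first lock, try the second, and if that fails drop the first and retry.
 * After a bounded number of attempts, yield so that on a single core the
 * current holder of the second lock gets a chance to run and release it.
 */
static inline void get_qe_locks(qe_t *qe1, qe_t *qe2)
{
	int i;

	/* Enqueue to self: both entries share one lock. */
	if (qe1 == qe2) {
		pthread_spin_lock(&qe1->lock);
		return;
	}

	while (1) {
		for (i = 0; i < 10; i++) {
			pthread_spin_lock(&qe1->lock);
			if (pthread_spin_trylock(&qe2->lock) == 0)
				return;
			pthread_spin_unlock(&qe1->lock);
		}
		sched_yield();
	}
}

/*
 * Matching release, mirroring the "origin_qe != queue" checks the patch
 * adds to queue_enq(): when both entries are the same, unlock only once.
 */
static inline void put_qe_locks(qe_t *qe1, qe_t *qe2)
{
	if (qe1 != qe2)
		pthread_spin_unlock(&qe2->lock);
	pthread_spin_unlock(&qe1->lock);
}

The bounded try loop plus sched_yield() matters most on a single core:
spinning while holding no lock does not by itself let the thread that owns
the second lock run, so without the explicit yield the retry loop can spin
until the scheduler happens to preempt at exactly the right moment.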