From patchwork Mon Nov 2 13:43:00 2015
X-Patchwork-Submitter: "Wallen, Carl (Nokia - FI/Espoo)"
X-Patchwork-Id: 55898
From: "Wallen, Carl (Nokia - FI/Espoo)"
To: EXT Bill Fischofer, "lng-odp@lists.linaro.org"
Date: Mon, 2 Nov 2015 13:43:00 +0000
Message-ID: <27228BE6FE51F54FBE6396D376D2BD711155263D@DEMUMBX012.nsn-intra.net>
References: <1446237185-13264-1-git-send-email-bill.fischofer@linaro.org>
In-Reply-To: <1446237185-13264-1-git-send-email-bill.fischofer@linaro.org>
Subject: Re: [lng-odp] [API-NEXT PATCHv2] linux-generic: queue: yield trying to obtain multiple locks
Reviewed-by: Carl Wallen

-----Original Message-----
From: EXT Bill Fischofer [mailto:bill.fischofer@linaro.org]
Sent: Friday, October 30, 2015 10:33 PM
To: lng-odp@lists.linaro.org; Wallen, Carl (Nokia - FI/Espoo)
Cc: Bill Fischofer
Subject: [API-NEXT PATCHv2] linux-generic: queue: yield trying to obtain multiple locks

To avoid deadlock, especially on a single core, force an explicit yield
while holding neither lock when attempting to acquire multiple locks for
ordered queue processing. Also handle enqueues to self, since in that case
the origin and target queues share a single lock.

This addresses the aspect of Bug
https://bugs.linaro.org/show_bug.cgi?id=1879 relating to deadlock on
unicore systems.

Signed-off-by: Bill Fischofer
---
 platform/linux-generic/odp_queue.c | 47 ++++++++++++++++++++++++++++++--------
 1 file changed, 38 insertions(+), 9 deletions(-)

diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index a27af0b..4366683 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -48,6 +48,36 @@ typedef struct queue_table_t {
 
 static queue_table_t *queue_tbl;
 
+static inline void get_qe_locks(queue_entry_t *qe1, queue_entry_t *qe2)
+{
+	int i;
+
+	/* Special case: enq to self */
+	if (qe1 == qe2) {
+		LOCK(&qe1->s.lock);
+		return;
+	}
+
+	/* Enq to another queue. Issue is that since any queue can be either
+	 * origin or target we can't have a static lock hierarchy. Strategy is
+	 * to get one lock then attempt to get the other. If the second lock
+	 * attempt fails, release the first and try again. Note that in single
+	 * CPU mode we require the explicit yield since otherwise we may never
+	 * resolve unless the scheduler happens to timeslice exactly when we
+	 * hold no lock.
+	 */
+	while (1) {
+		for (i = 0; i < 10; i++) {
+			LOCK(&qe1->s.lock);
+			if (LOCK_TRY(&qe2->s.lock))
+				return;
+			UNLOCK(&qe1->s.lock);
+			odp_sync_stores();
+		}
+		sched_yield();
+	}
+}
+
 queue_entry_t *get_qentry(uint32_t queue_id)
 {
 	return &queue_tbl->queue[queue_id];
@@ -370,14 +400,11 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 
 	/* Need two locks for enq operations from ordered queues */
 	if (origin_qe) {
-		LOCK(&origin_qe->s.lock);
-		while (!LOCK_TRY(&queue->s.lock)) {
-			UNLOCK(&origin_qe->s.lock);
-			LOCK(&origin_qe->s.lock);
-		}
+		get_qe_locks(origin_qe, queue);
 		if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) {
 			UNLOCK(&queue->s.lock);
-			UNLOCK(&origin_qe->s.lock);
+			if (origin_qe != queue)
+				UNLOCK(&origin_qe->s.lock);
 			ODP_ERR("Bad origin queue status\n");
 			ODP_ERR("queue = %s, origin q = %s, buf = %p\n",
 				queue->s.name, origin_qe->s.name, buf_hdr);
@@ -389,7 +416,7 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 
 	if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
 		UNLOCK(&queue->s.lock);
-		if (origin_qe)
+		if (origin_qe && origin_qe != queue)
 			UNLOCK(&origin_qe->s.lock);
 		ODP_ERR("Bad queue status\n");
 		return -1;
@@ -405,7 +432,8 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 		 * we're done here.
 		 */
 		UNLOCK(&queue->s.lock);
-		UNLOCK(&origin_qe->s.lock);
+		if (origin_qe != queue)
+			UNLOCK(&origin_qe->s.lock);
 		return 0;
 	}
 
@@ -477,7 +505,8 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain)
 
 	/* Now handle any unblocked complete buffers destined for
 	 * other queues, appending placeholder bufs as needed.
 	 */
-	UNLOCK(&queue->s.lock);
+	if (origin_qe != queue)
+		UNLOCK(&queue->s.lock);
 	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, 1, 0);
 	UNLOCK(&origin_qe->s.lock);
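
For anyone studying the locking strategy above: the same
acquire-both-or-back-off pattern can be written as a standalone sketch
using plain pthread spinlocks in place of ODP's LOCK/LOCK_TRY/UNLOCK
macros. This is only an illustration, not part of the patch or the ODP
API; the helper name lock_pair() and the TRIES constant are hypothetical.

#include <pthread.h>
#include <sched.h>

#define TRIES 10	/* try-lock attempts before yielding the CPU */

static void lock_pair(pthread_spinlock_t *a, pthread_spinlock_t *b)
{
	int i;

	/* Enqueue to self: both sides share one lock, so take it once */
	if (a == b) {
		pthread_spin_lock(a);
		return;
	}

	/* Either queue may be origin or target, so no static lock
	 * hierarchy exists. Take the first lock, try-lock the second,
	 * and on failure drop the first so the other thread can make
	 * progress. After a few failed rounds, yield so that on a single
	 * core the holder of the second lock actually gets to run.
	 */
	for (;;) {
		for (i = 0; i < TRIES; i++) {
			pthread_spin_lock(a);
			if (pthread_spin_trylock(b) == 0)
				return;
			pthread_spin_unlock(a);
		}
		sched_yield();
	}
}

Callers release with pthread_spin_unlock(b) and, only when a != b, also
pthread_spin_unlock(a), mirroring the origin_qe != queue checks in the
patch above.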