From: Stuart Haslam <stuart.haslam@linaro.org>
To: Maxim Uvarov
Cc: lng-odp@lists.linaro.org
Date: Tue, 20 Oct 2015 15:12:03 +0100
Message-ID: <20151020141203.GA27190@localhost>
In-Reply-To: <56264746.9010402@linaro.org>
Subject: Re: [lng-odp] [PATCHv10 7/8] linux-generic: add ipc pktio support

On Tue, Oct 20, 2015 at 04:53:10PM +0300, Maxim Uvarov wrote:
>
> slave:
>
> On 10/20/2015 15:41, Stuart Haslam wrote:
> >> +	/* recv() rings */
> >> +	pktio_entry->s.ipc.recv.r = pktio_entry->s.ipc.m.prod;
> >> +	pktio_entry->s.ipc.recv.r_p = pktio_entry->s.ipc.m.cons;
> >> +	/* tx() rings */
> >> +	pktio_entry->s.ipc.tx.r = pktio_entry->s.ipc.s.prod;
> >> +	pktio_entry->s.ipc.tx.r_p = pktio_entry->s.ipc.s.cons;
> >> +
> > This isn't exactly what I had in mind, can't you just use these names
> > directly in the first place in the code above? Is there any reason to
> > retain two copies of the pointers?
>
> master:
>
> +	/* recv() rings */
> +	pktio_entry->s.ipc.recv.r = pktio_entry->s.ipc.s.prod;
> +	pktio_entry->s.ipc.recv.r_p = pktio_entry->s.ipc.s.cons;
> +	/* tx() rings */
> +	pktio_entry->s.ipc.tx.r = pktio_entry->s.ipc.m.prod;
> +	pktio_entry->s.ipc.tx.r_p = pktio_entry->s.ipc.m.cons;
> +
>
> They are crossed. Instead of implementing two functions, slave_recv()
> and master_recv(), I added a single one plus these remapped links.
>
> But I can rename the rings to something like:
>
> pktio_entry->s.ipc.local.prod
> pktio_entry->s.ipc.local.cons
> pktio_entry->s.ipc.remote.prod
> pktio_entry->s.ipc.remote.cons
>
> local - reflects the local pool/ring mappings;
> remote - reflects the remote process's pool/ring mappings;
>
> Maxim.

I don't think we're understanding each other. I mean something like this
(based on top of this series):

diff --git a/platform/linux-generic/include/odp_packet_io_internal.h b/platform/linux-generic/include/odp_packet_io_internal.h
index e25e747..5ee3fd3 100644
--- a/platform/linux-generic/include/odp_packet_io_internal.h
+++ b/platform/linux-generic/include/odp_packet_io_internal.h
@@ -42,31 +42,22 @@ typedef struct {
 typedef struct {
 	/* TX */
 	struct {
-		odph_ring_t	*prod; /**< ODP ring for IPC msg packets
+		odph_ring_t	*send; /**< ODP ring for IPC msg packets
					    indexes transmitted to shared
					    memory */
-		odph_ring_t	*cons; /**< ODP ring for IPC msg packets
+		odph_ring_t	*free; /**< ODP ring for IPC msg packets
					    indexes already processed by remote process */
-	} m; /* master */
+	} tx;
 	/* RX */
 	struct {
-		odph_ring_t	*prod; /**< ODP ring for IPC msg packets
+		odph_ring_t	*recv; /**< ODP ring for IPC msg packets
					    indexes received from shared
					    memory (from remote process) */
-		odph_ring_t	*cons; /**< ODP ring for IPC msg packets
+		odph_ring_t	*free; /**< ODP ring for IPC msg packets
					    indexes already processed by current process */
-	} s; /* slave */
-	struct {
-		odph_ring_t	*r; /**< ring to receive from */
-		odph_ring_t	*r_p; /**< after recv is done place packet descr to
-					   that produced ring */
-	} recv; /* remapped above rings for easy usage in recv() */
-	struct {
-		odph_ring_t	*r; /**< ring to transmit packets */
-		odph_ring_t	*r_p; /**< ring with already transmitted packets */
-	} tx; /* remapped above rings for easy usage in send() */
+	} rx; /* slave */
 	void		*pool_base;		/**< Remote pool base addr */
 	void		*pool_mdata_base;	/**< Remote pool mdata base addr */
 	uint64_t	pkt_size;		/**< Packet size in remote pool */
diff --git a/platform/linux-generic/pktio/ipc.c b/platform/linux-generic/pktio/ipc.c
index 835de17..791ad1f 100644
--- a/platform/linux-generic/pktio/ipc.c
+++ b/platform/linux-generic/pktio/ipc.c
@@ -116,13 +116,6 @@ static int _ipc_master_post_init(pktio_entry_t *pktio_entry)
 	pktio_entry->s.ipc.pool_mdata_base = (char *)ipc_pool_base +
 					     pinfo->slave.mdata_offset;
 
-	/* recv() rings */
-	pktio_entry->s.ipc.recv.r = pktio_entry->s.ipc.s.prod;
-	pktio_entry->s.ipc.recv.r_p = pktio_entry->s.ipc.s.cons;
-	/* tx() rings */
-	pktio_entry->s.ipc.tx.r = pktio_entry->s.ipc.m.prod;
-	pktio_entry->s.ipc.tx.r_p = pktio_entry->s.ipc.m.cons;
-
 	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1);
 
 	ODP_DBG("Post init... DONE.\n");
@@ -152,59 +145,59 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry,
 	 * to be processed packets ring.
 	 */
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev);
-	pktio_entry->s.ipc.m.prod = odph_ring_create(ipc_shm_name,
+	pktio_entry->s.ipc.tx.send = odph_ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			ODPH_RING_SHM_PROC | ODPH_RING_NO_LIST);
-	if (!pktio_entry->s.ipc.m.prod) {
+	if (!pktio_entry->s.ipc.tx.send) {
 		ODP_DBG("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		return -1;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.m.prod),
-		odph_ring_free_count(pktio_entry->s.ipc.m.prod));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.tx.send),
+		odph_ring_free_count(pktio_entry->s.ipc.tx.send));
 
 	/* generate name in shm like ipc_pktio_p for
 	 * already processed packets */
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev);
-	pktio_entry->s.ipc.m.cons = odph_ring_create(ipc_shm_name,
+	pktio_entry->s.ipc.tx.free = odph_ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			ODPH_RING_SHM_PROC | ODPH_RING_NO_LIST);
-	if (!pktio_entry->s.ipc.m.cons) {
+	if (!pktio_entry->s.ipc.tx.free) {
 		ODP_DBG("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		goto free_m_prod;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.m.cons),
-		odph_ring_free_count(pktio_entry->s.ipc.m.cons));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.tx.free),
+		odph_ring_free_count(pktio_entry->s.ipc.tx.free));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev);
-	pktio_entry->s.ipc.s.prod = odph_ring_create(ipc_shm_name,
+	pktio_entry->s.ipc.rx.recv = odph_ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			ODPH_RING_SHM_PROC | ODPH_RING_NO_LIST);
-	if (!pktio_entry->s.ipc.s.prod) {
+	if (!pktio_entry->s.ipc.rx.recv) {
 		ODP_DBG("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		goto free_m_cons;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.s.prod),
-		odph_ring_free_count(pktio_entry->s.ipc.s.prod));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.rx.recv),
+		odph_ring_free_count(pktio_entry->s.ipc.rx.recv));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev);
-	pktio_entry->s.ipc.s.cons = odph_ring_create(ipc_shm_name,
+	pktio_entry->s.ipc.rx.free = odph_ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			ODPH_RING_SHM_PROC | ODPH_RING_NO_LIST);
-	if (!pktio_entry->s.ipc.s.cons) {
+	if (!pktio_entry->s.ipc.rx.free) {
 		ODP_DBG("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		goto free_s_prod;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.s.cons),
-		odph_ring_free_count(pktio_entry->s.ipc.s.cons));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.rx.free),
+		odph_ring_free_count(pktio_entry->s.ipc.rx.free));
 
 	/* Set up pool name for remote info */
 	pinfo = pktio_entry->s.ipc.pinfo;
@@ -316,49 +309,49 @@ static int _ipc_slave_post_init(pktio_entry_t *pktio_entry)
 	const char *dev = pktio_entry->s.name;
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev);
-	pktio_entry->s.ipc.m.prod = _ipc_shm_map(ipc_shm_name, ring_size);
-	if (!pktio_entry->s.ipc.m.prod) {
+	pktio_entry->s.ipc.rx.recv = _ipc_shm_map(ipc_shm_name, ring_size);
+	if (!pktio_entry->s.ipc.rx.recv) {
 		ODP_DBG("pid %d unable to find ipc ring %s name\n",
 			getpid(), dev);
 		sleep(1);
 		return -1;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.m.prod),
-		odph_ring_free_count(pktio_entry->s.ipc.m.prod));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.rx.recv),
+		odph_ring_free_count(pktio_entry->s.ipc.rx.recv));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev);
-	pktio_entry->s.ipc.m.cons = _ipc_shm_map(ipc_shm_name, ring_size);
-	if (!pktio_entry->s.ipc.m.cons) {
+	pktio_entry->s.ipc.rx.free = _ipc_shm_map(ipc_shm_name, ring_size);
+	if (!pktio_entry->s.ipc.rx.free) {
 		ODP_DBG("pid %d unable to find ipc ring %s name\n",
 			getpid(), dev);
 		goto free_m_prod;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.m.cons),
-		odph_ring_free_count(pktio_entry->s.ipc.m.cons));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.rx.free),
+		odph_ring_free_count(pktio_entry->s.ipc.rx.free));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev);
-	pktio_entry->s.ipc.s.prod = _ipc_shm_map(ipc_shm_name, ring_size);
-	if (!pktio_entry->s.ipc.s.prod) {
+	pktio_entry->s.ipc.tx.send = _ipc_shm_map(ipc_shm_name, ring_size);
+	if (!pktio_entry->s.ipc.tx.send) {
 		ODP_DBG("pid %d unable to find ipc ring %s name\n",
 			getpid(), dev);
 		goto free_m_cons;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.s.prod),
-		odph_ring_free_count(pktio_entry->s.ipc.s.prod));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.tx.send),
+		odph_ring_free_count(pktio_entry->s.ipc.tx.send));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev);
-	pktio_entry->s.ipc.s.cons = _ipc_shm_map(ipc_shm_name, ring_size);
-	if (!pktio_entry->s.ipc.s.cons) {
+	pktio_entry->s.ipc.tx.free = _ipc_shm_map(ipc_shm_name, ring_size);
+	if (!pktio_entry->s.ipc.tx.free) {
 		ODP_DBG("pid %d unable to find ipc ring %s name\n",
 			getpid(), dev);
 		goto free_s_prod;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.s.cons),
-		odph_ring_free_count(pktio_entry->s.ipc.s.cons));
+		ipc_shm_name, odph_ring_count(pktio_entry->s.ipc.tx.free),
+		odph_ring_free_count(pktio_entry->s.ipc.tx.free));
 
 	/* Get info about remote pool */
 	pinfo = pktio_entry->s.ipc.pinfo;
@@ -375,13 +368,6 @@ static int _ipc_slave_post_init(pktio_entry_t *pktio_entry)
 	 */
 	_odp_ipc_export_pool(pinfo, pktio_entry->s.ipc.pool);
 
-	/* recv() rings */
-	pktio_entry->s.ipc.recv.r = pktio_entry->s.ipc.m.prod;
-	pktio_entry->s.ipc.recv.r_p = pktio_entry->s.ipc.m.cons;
-	/* tx() rings */
-	pktio_entry->s.ipc.tx.r = pktio_entry->s.ipc.s.prod;
-	pktio_entry->s.ipc.tx.r_p = pktio_entry->s.ipc.s.cons;
-
 	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1);
 
 	ODP_DBG("Post init... DONE.\n");
DONE.\n"); @@ -497,7 +483,7 @@ int ipc_pktio_recv(pktio_entry_t *pktio_entry, odph_ring_t *tx_r_p; rbuf_p = (void *)&r_p_pkts; - tx_r_p = pktio_entry->s.ipc.tx.r_p; + tx_r_p = pktio_entry->s.ipc.tx.free; ret = odph_ring_mc_dequeue_burst(tx_r_p, rbuf_p, PKTIO_IPC_ENTRIES); if (0 == ret) @@ -508,7 +494,7 @@ int ipc_pktio_recv(pktio_entry_t *pktio_entry, } } - r = pktio_entry->s.ipc.recv.r; + r = pktio_entry->s.ipc.rx.recv; pkts = odph_ring_mc_dequeue_burst(r, ipcbufs_p, len); if (odp_unlikely(pkts < 0)) ODP_ABORT("error to dequeue no packets\n"); @@ -587,7 +573,7 @@ int ipc_pktio_recv(pktio_entry_t *pktio_entry, } /* Now tell other process that we no longer need that buffers.*/ - r_p = pktio_entry->s.ipc.recv.r_p; + r_p = pktio_entry->s.ipc.rx.free; pkts = odph_ring_mp_enqueue_burst(r_p, ipcbufs_p, i); if (odp_unlikely(pkts < 0)) ODP_ABORT("ipc: odp_ring_mp_enqueue_bulk r_p fail\n"); @@ -613,7 +599,7 @@ int ipc_pktio_send(pktio_entry_t *pktio_entry, odp_packet_t pkt_table[], odph_ring_t *r_p; rbuf_p = (void *)&r_p_pkts; - r_p = pktio_entry->s.ipc.tx.r_p; + r_p = pktio_entry->s.ipc.tx.free; ret = odph_ring_mc_dequeue_burst(r_p, rbuf_p, PKTIO_IPC_ENTRIES); if (0 == ret) @@ -661,7 +647,7 @@ int ipc_pktio_send(pktio_entry_t *pktio_entry, odp_packet_t pkt_table[], /* Put packets to ring to be processed in other process. */ rbuf_p = (void *)&pkt_table[0]; - r = pktio_entry->s.ipc.tx.r; + r = pktio_entry->s.ipc.tx.send; ret = odph_ring_mp_enqueue_burst(r, rbuf_p, len); if (odp_unlikely(ret < 0)) { ODP_ERR("pid %d odp_ring_mp_enqueue_bulk fail, ipc_slave %d, ret %d\n",