From patchwork Tue Apr 1 19:01:12 2014
X-Patchwork-Submitter: Murali Karicheri <m-karicheri2@ti.com>
X-Patchwork-Id: 27570
From: Murali Karicheri <m-karicheri2@ti.com>
Date: Tue, 1 Apr 2014 15:01:12 -0400
Message-ID: <1396378873-17080-2-git-send-email-m-karicheri2@ti.com>
In-Reply-To: <1396378873-17080-1-git-send-email-m-karicheri2@ti.com>
References: <1396378873-17080-1-git-send-email-m-karicheri2@ti.com>
Cc: WingMan Kwok
Subject: [U-Boot] [PATCH v4 1/2] keystone2: add keystone multicore navigator driver

From: Vitaly Andrianov

The Multicore Navigator consists of the Network Coprocessor (NetCP) and
the Queue Manager subsystem.

More details on the hardware can be obtained from the following links:

Network Coprocessor: http://www.ti.com/lit/pdf/sprugz6
Multicore Navigator: http://www.ti.com/lit/pdf/sprugr9

The Multicore Navigator driver implements APIs to configure the Queue
Manager and the NetCP packet DMA.

Signed-off-by: Vitaly Andrianov
Signed-off-by: Murali Karicheri
Signed-off-by: WingMan Kwok
Acked-by: Tom Rini
---
 arch/arm/cpu/armv7/keystone/Makefile              |    1 +
 arch/arm/cpu/armv7/keystone/keystone_nav.c        |  376 +++++++++++++++++++++
 arch/arm/include/asm/arch-keystone/keystone_nav.h |  193 +++++++++++
 3 files changed, 570 insertions(+)
 create mode 100644 arch/arm/cpu/armv7/keystone/keystone_nav.c
 create mode 100644 arch/arm/include/asm/arch-keystone/keystone_nav.h

diff --git a/arch/arm/cpu/armv7/keystone/Makefile b/arch/arm/cpu/armv7/keystone/Makefile
index 7924cfa..7d0374b 100644
--- a/arch/arm/cpu/armv7/keystone/Makefile
+++ b/arch/arm/cpu/armv7/keystone/Makefile
@@ -11,6 +11,7 @@ obj-y += psc.o
 obj-y += clock.o
 obj-y += cmd_clock.o
 obj-y += cmd_mon.o
+obj-y += keystone_nav.o
 obj-y += msmc.o
 obj-$(CONFIG_SPL_BUILD) += spl.o
 obj-y += ddr3.o
diff --git a/arch/arm/cpu/armv7/keystone/keystone_nav.c b/arch/arm/cpu/armv7/keystone/keystone_nav.c
new file mode 100644
index 0000000..39d6f99
--- /dev/null
+++ b/arch/arm/cpu/armv7/keystone/keystone_nav.c
@@ -0,0 +1,376 @@
+/*
+ * Multicore Navigator driver for TI Keystone 2 devices.
+ *
+ * (C) Copyright 2012-2014
+ *     Texas Instruments Incorporated,
+ *
+ * SPDX-License-Identifier:	GPL-2.0+
+ */
+#include <common.h>
+#include <asm/io.h>
+#include <asm/arch/keystone_nav.h>
+
+static int soc_type =
+#ifdef CONFIG_SOC_K2HK
+	k2hk;
+#endif
+
+struct qm_config k2hk_qm_memmap = {
+	.stat_cfg	= 0x02a40000,
+	.queue		= (struct qm_reg_queue *)0x02a80000,
+	.mngr_vbusm	= 0x23a80000,
+	.i_lram		= 0x00100000,
+	.proxy		= (struct qm_reg_queue *)0x02ac0000,
+	.status_ram	= 0x02a06000,
+	.mngr_cfg	= (struct qm_cfg_reg *)0x02a02000,
+	.intd_cfg	= 0x02a0c000,
+	.desc_mem	= (struct descr_mem_setup_reg *)0x02a03000,
+	.region_num	= 64,
+	.pdsp_cmd	= 0x02a20000,
+	.pdsp_ctl	= 0x02a0f000,
+	.pdsp_iram	= 0x02a10000,
+	.qpool_num	= 4000,
+};
+
+/*
+ * We are going to use only one type of descriptors - host packet
+ * descriptors. We statically allocate memory for them here.
+ */
+struct qm_host_desc desc_pool[HDESC_NUM] __aligned(sizeof(struct qm_host_desc));
+
+static struct qm_config *qm_cfg;
+
+inline int num_of_desc_to_reg(int num_descr)
+{
+	int j, num;
+
+	for (j = 0, num = 32; j < 15; j++, num *= 2) {
+		if (num_descr <= num)
+			return j;
+	}
+
+	return 15;
+}
+
+static int _qm_init(struct qm_config *cfg)
+{
+	u32 j;
+
+	if (cfg == NULL)
+		return QM_ERR;
+
+	qm_cfg = cfg;
+
+	qm_cfg->mngr_cfg->link_ram_base0 = qm_cfg->i_lram;
+	qm_cfg->mngr_cfg->link_ram_size0 = HDESC_NUM * 8;
+	qm_cfg->mngr_cfg->link_ram_base1 = 0;
+	qm_cfg->mngr_cfg->link_ram_size1 = 0;
+	qm_cfg->mngr_cfg->link_ram_base2 = 0;
+
+	qm_cfg->desc_mem[0].base_addr = (u32)desc_pool;
+	qm_cfg->desc_mem[0].start_idx = 0;
+	qm_cfg->desc_mem[0].desc_reg_size =
+		(((sizeof(struct qm_host_desc) >> 4) - 1) << 16) |
+		num_of_desc_to_reg(HDESC_NUM);
+
+	memset(desc_pool, 0, sizeof(desc_pool));
+	for (j = 0; j < HDESC_NUM; j++)
+		qm_push(&desc_pool[j], qm_cfg->qpool_num);
+
+	return QM_OK;
+}
+
+int qm_init(void)
+{
+	switch (soc_type) {
+	case k2hk:
+		return _qm_init(&k2hk_qm_memmap);
+	}
+
+	return QM_ERR;
+}
+
+void qm_close(void)
+{
+	u32 j;
+
+	if (qm_cfg == NULL)
+		return;
+
+	queue_close(qm_cfg->qpool_num);
+
+	qm_cfg->mngr_cfg->link_ram_base0 = 0;
+	qm_cfg->mngr_cfg->link_ram_size0 = 0;
+	qm_cfg->mngr_cfg->link_ram_base1 = 0;
+	qm_cfg->mngr_cfg->link_ram_size1 = 0;
+	qm_cfg->mngr_cfg->link_ram_base2 = 0;
+
+	for (j = 0; j < qm_cfg->region_num; j++) {
+		qm_cfg->desc_mem[j].base_addr = 0;
+		qm_cfg->desc_mem[j].start_idx = 0;
+		qm_cfg->desc_mem[j].desc_reg_size = 0;
+	}
+
+	qm_cfg = NULL;
+}
+
+void qm_push(struct qm_host_desc *hd, u32 qnum)
+{
+	u32 regd;
+
+	if (!qm_cfg)
+		return;
+
+	cpu_to_bus((u32 *)hd, sizeof(struct qm_host_desc)/4);
+	regd = (u32)hd | ((sizeof(struct qm_host_desc) >> 4) - 1);
+	writel(regd, &qm_cfg->queue[qnum].ptr_size_thresh);
+}
+
+void qm_buff_push(struct qm_host_desc *hd, u32 qnum,
+		  void *buff_ptr, u32 buff_len)
+{
+	hd->orig_buff_len = buff_len;
+	hd->buff_len = buff_len;
+	hd->orig_buff_ptr = (u32)buff_ptr;
+	hd->buff_ptr = (u32)buff_ptr;
+	qm_push(hd, qnum);
+}
+
+struct qm_host_desc *qm_pop(u32 qnum)
+{
+	u32 uhd;
+
+	if (!qm_cfg)
+		return NULL;
+
+	uhd = readl(&qm_cfg->queue[qnum].ptr_size_thresh) & ~0xf;
+	if (uhd)
+		cpu_to_bus((u32 *)uhd, sizeof(struct qm_host_desc)/4);
+
+	return (struct qm_host_desc *)uhd;
+}
+
+struct qm_host_desc *qm_pop_from_free_pool(void)
+{
+	if (!qm_cfg)
+		return NULL;
+
+	return qm_pop(qm_cfg->qpool_num);
+}
+
+void queue_close(u32 qnum)
+{
+	struct qm_host_desc *hd;
+
+	while ((hd = qm_pop(qnum)))
+		;
+}
+
+/*
+ * DMA API
+ */
+
+struct pktdma_cfg k2hk_netcp_pktdma = {
+	.global		= (struct global_ctl_regs *)0x02004000,
+	.tx_ch		= (struct tx_chan_regs *)0x02004400,
+	.tx_ch_num	= 9,
+	.rx_ch		= (struct rx_chan_regs *)0x02004800,
+	.rx_ch_num	= 26,
+	.tx_sched	= (u32 *)0x02004c00,
+	.rx_flows	= (struct rx_flow_regs *)0x02005000,
+	.rx_flow_num	= 32,
+	.rx_free_q	= 4001,
+	.rx_rcv_q	= 4002,
+	.tx_snd_q	= 648,
+};
+
+struct pktdma_cfg *netcp;
+
+static int netcp_rx_disable(void)
+{
+	u32 j, v, k;
+
+	for (j = 0; j < netcp->rx_ch_num; j++) {
+		v = readl(&netcp->rx_ch[j].cfg_a);
+		if (!(v & CPDMA_CHAN_A_ENABLE))
+			continue;
+
+		writel(v | CPDMA_CHAN_A_TDOWN, &netcp->rx_ch[j].cfg_a);
+		for (k = 0; k < TDOWN_TIMEOUT_COUNT; k++) {
+			udelay(100);
+			v = readl(&netcp->rx_ch[j].cfg_a);
+			if (!(v & CPDMA_CHAN_A_ENABLE))
+				break;
+		}
+		/* TODO: report a teardown error if TDOWN_TIMEOUT_COUNT is reached */
+	}
+
+	/* Clear all of the flow registers */
+	for (j = 0; j < netcp->rx_flow_num; j++) {
+		writel(0, &netcp->rx_flows[j].control);
+		writel(0, &netcp->rx_flows[j].tags);
+		writel(0, &netcp->rx_flows[j].tag_sel);
+		writel(0, &netcp->rx_flows[j].fdq_sel[0]);
+		writel(0, &netcp->rx_flows[j].fdq_sel[1]);
+		writel(0, &netcp->rx_flows[j].thresh[0]);
+		writel(0, &netcp->rx_flows[j].thresh[1]);
+		writel(0, &netcp->rx_flows[j].thresh[2]);
+	}
+
+	return QM_OK;
+}
+
+static int netcp_tx_disable(void)
+{
+	u32 j, v, k;
+
+	for (j = 0; j < netcp->tx_ch_num; j++) {
+		v = readl(&netcp->tx_ch[j].cfg_a);
+		if (!(v & CPDMA_CHAN_A_ENABLE))
+			continue;
+
+		writel(v | CPDMA_CHAN_A_TDOWN, &netcp->tx_ch[j].cfg_a);
+		for (k = 0; k < TDOWN_TIMEOUT_COUNT; k++) {
+			udelay(100);
+			v = readl(&netcp->tx_ch[j].cfg_a);
+			if (!(v & CPDMA_CHAN_A_ENABLE))
+				break;
+		}
+		/* TODO: report a teardown error if TDOWN_TIMEOUT_COUNT is reached */
+	}
+
+	return QM_OK;
+}
+
+static int _netcp_init(struct pktdma_cfg *netcp_cfg,
+		       struct rx_buff_desc *rx_buffers)
+{
+	u32 j, v;
+	struct qm_host_desc *hd;
+	u8 *rx_ptr;
+
+	if (netcp_cfg == NULL || rx_buffers == NULL ||
+	    rx_buffers->buff_ptr == NULL || qm_cfg == NULL)
+		return QM_ERR;
+
+	netcp = netcp_cfg;
+	netcp->rx_flow = rx_buffers->rx_flow;
+
+	/* init rx queue */
+	rx_ptr = rx_buffers->buff_ptr;
+
+	for (j = 0; j < rx_buffers->num_buffs; j++) {
+		hd = qm_pop(qm_cfg->qpool_num);
+		if (hd == NULL)
+			return QM_ERR;
+
+		qm_buff_push(hd, netcp->rx_free_q,
+			     rx_ptr, rx_buffers->buff_len);
+
+		rx_ptr += rx_buffers->buff_len;
+	}
+
+	netcp_rx_disable();
+
+	/* configure rx channels */
+	v = CPDMA_REG_VAL_MAKE_RX_FLOW_A(1, 1, 0, 0, 0, 0, 0, netcp->rx_rcv_q);
+	writel(v, &netcp->rx_flows[netcp->rx_flow].control);
+	writel(0, &netcp->rx_flows[netcp->rx_flow].tags);
+	writel(0, &netcp->rx_flows[netcp->rx_flow].tag_sel);
+
+	v = CPDMA_REG_VAL_MAKE_RX_FLOW_D(0, netcp->rx_free_q, 0,
+					 netcp->rx_free_q);
+
+	writel(v, &netcp->rx_flows[netcp->rx_flow].fdq_sel[0]);
+	writel(v, &netcp->rx_flows[netcp->rx_flow].fdq_sel[1]);
+	writel(0, &netcp->rx_flows[netcp->rx_flow].thresh[0]);
+	writel(0, &netcp->rx_flows[netcp->rx_flow].thresh[1]);
+	writel(0, &netcp->rx_flows[netcp->rx_flow].thresh[2]);
+
+	for (j = 0; j < netcp->rx_ch_num; j++)
+		writel(CPDMA_CHAN_A_ENABLE, &netcp->rx_ch[j].cfg_a);
+
+	/* configure tx channels */
+	/* Disable loopback in the tx direction */
+	writel(0, &netcp->global->emulation_control);
+
+/* TODO: make this dependent on a SoC type variable */
+#ifdef CONFIG_SOC_K2HK
+	/* Set QM base address, only for K2x devices */
+	writel(0x23a80000, &netcp->global->qm_base_addr[0]);
+#endif
+
+	/* Enable all channels. The current state isn't important */
+	for (j = 0; j < netcp->tx_ch_num; j++) {
+		writel(0, &netcp->tx_ch[j].cfg_b);
+		writel(CPDMA_CHAN_A_ENABLE, &netcp->tx_ch[j].cfg_a);
+	}
+
+	return QM_OK;
+}
+
+int netcp_init(struct rx_buff_desc *rx_buffers)
+{
+	switch (soc_type) {
+	case k2hk:
+		_netcp_init(&k2hk_netcp_pktdma, rx_buffers);
+		return QM_OK;
+	}
+	return QM_ERR;
+}
+
+int netcp_close(void)
+{
+	if (!netcp)
+		return QM_ERR;
+
+	netcp_tx_disable();
+	netcp_rx_disable();
+
+	queue_close(netcp->rx_free_q);
+	queue_close(netcp->rx_rcv_q);
+	queue_close(netcp->tx_snd_q);
+
+	return QM_OK;
+}
+
+int netcp_send(u32 *pkt, int num_bytes, u32 swinfo2)
+{
+	struct qm_host_desc *hd;
+
+	hd = qm_pop(qm_cfg->qpool_num);
+	if (hd == NULL)
+		return QM_ERR;
+
+	hd->desc_info = num_bytes;
+	hd->swinfo[2] = swinfo2;
+	hd->packet_info = qm_cfg->qpool_num;
+
+	qm_buff_push(hd, netcp->tx_snd_q, pkt, num_bytes);
+
+	return QM_OK;
+}
+
+void *netcp_recv(u32 **pkt, int *num_bytes)
+{
+	struct qm_host_desc *hd;
+
+	hd = qm_pop(netcp->rx_rcv_q);
+	if (!hd)
+		return NULL;
+
+	*pkt = (u32 *)hd->buff_ptr;
+	*num_bytes = hd->desc_info & 0x3fffff;
+
+	return hd;
+}
+
+void netcp_release_rxhd(void *hd)
+{
+	struct qm_host_desc *_hd = (struct qm_host_desc *)hd;
+
+	_hd->buff_len = _hd->orig_buff_len;
+	_hd->buff_ptr = _hd->orig_buff_ptr;
+
+	qm_push(_hd, netcp->rx_free_q);
+}
diff --git a/arch/arm/include/asm/arch-keystone/keystone_nav.h b/arch/arm/include/asm/arch-keystone/keystone_nav.h
new file mode 100644
index 0000000..ab81eaf
--- /dev/null
+++ b/arch/arm/include/asm/arch-keystone/keystone_nav.h
@@ -0,0 +1,193 @@
+/*
+ * Multicore Navigator definitions
+ *
+ * (C) Copyright 2012-2014
+ *     Texas Instruments Incorporated,
+ *
+ * SPDX-License-Identifier:	GPL-2.0+
+ */
+
+#ifndef _KEYSTONE_NAV_H_
+#define _KEYSTONE_NAV_H_
+
+#include <asm/arch/hardware.h>
+#include <asm/io.h>
+
+enum soc_type_t {
+	k2hk
+};
+
+#define QM_OK				0
+#define QM_ERR				-1
+#define QM_DESC_TYPE_HOST		0
+#define QM_DESC_PSINFO_IN_DESCR		0
+#define QM_DESC_DEFAULT_DESCINFO	(QM_DESC_TYPE_HOST << 30) | \
+					(QM_DESC_PSINFO_IN_DESCR << 22)
+
+/* Packet Info */
+#define QM_DESC_PINFO_EPIB		1
+#define QM_DESC_PINFO_RETURN_OWN	1
+#define QM_DESC_DEFAULT_PINFO		(QM_DESC_PINFO_EPIB << 31) | \
+					(QM_DESC_PINFO_RETURN_OWN << 15)
+
+struct qm_cfg_reg {
+	u32	revision;
+	u32	__pad1;
+	u32	divert;
+	u32	link_ram_base0;
+	u32	link_ram_size0;
+	u32	link_ram_base1;
+	u32	link_ram_size1;
+	u32	link_ram_base2;
+	u32	starvation[0];
+};
+
+struct descr_mem_setup_reg {
+	u32	base_addr;
+	u32	start_idx;
+	u32	desc_reg_size;
+	u32	_res0;
+};
+
+struct qm_reg_queue {
+	u32	entry_count;
+	u32	byte_count;
+	u32	packet_size;
+	u32	ptr_size_thresh;
+};
+
+struct qm_config {
+	/* QM module addresses */
+	u32	stat_cfg;	/* status and config */
+	struct qm_reg_queue	*queue;	/* management region */
+	u32	mngr_vbusm;	/* management region (VBUSM) */
+	u32	i_lram;		/* internal linking RAM */
+	struct qm_reg_queue	*proxy;
+	u32	status_ram;
+	struct qm_cfg_reg	*mngr_cfg;
+				/* Queue manager config region */
+	u32	intd_cfg;	/* QMSS INTD config region */
+	struct descr_mem_setup_reg	*desc_mem;
+				/* descriptor memory setup region */
+	u32	region_num;
+	u32	pdsp_cmd;	/* PDSP1 command interface */
+	u32	pdsp_ctl;	/* PDSP1 control registers */
+	u32	pdsp_iram;
+	/* QM configuration parameters */
+
+	u32	qpool_num;	/* queue number of the free descriptor pool */
+};
+
+struct qm_host_desc {
+	u32	desc_info;
+	u32	tag_info;
+	u32	packet_info;
+	u32	buff_len;
+	u32	buff_ptr;
+	u32	next_bdptr;
+	u32	orig_buff_len;
+	u32	orig_buff_ptr;
+	u32	timestamp;
+	u32	swinfo[3];
+	u32	ps_data[20];
+};
+
+#define HDESC_NUM 256
+
+int	qm_init(void);
+void	qm_close(void);
+void	qm_push(struct qm_host_desc *hd, u32 qnum);
+struct qm_host_desc *qm_pop(u32 qnum);
+
+void	qm_buff_push(struct qm_host_desc *hd, u32 qnum,
+		     void *buff_ptr, u32 buff_len);
+
+struct qm_host_desc *qm_pop_from_free_pool(void);
+void	queue_close(u32 qnum);
+
+/*
+ * DMA API
+ */
+#define CPDMA_REG_VAL_MAKE_RX_FLOW_A(einfo, psinfo, rxerr, desc, \
+				     psloc, sopoff, qmgr, qnum) \
+	(((einfo & 1) << 30) | \
+	 ((psinfo & 1) << 29) | \
+	 ((rxerr & 1) << 28) | \
+	 ((desc & 3) << 26) | \
+	 ((psloc & 1) << 25) | \
+	 ((sopoff & 0x1ff) << 16) | \
+	 ((qmgr & 3) << 12) | \
+	 ((qnum & 0xfff) << 0))
+
+#define CPDMA_REG_VAL_MAKE_RX_FLOW_D(fd0qm, fd0qnum, fd1qm, fd1qnum) \
+	(((fd0qm & 3) << 28) | \
+	 ((fd0qnum & 0xfff) << 16) | \
+	 ((fd1qm & 3) << 12) | \
+	 ((fd1qnum & 0xfff) << 0))
+
+#define CPDMA_CHAN_A_ENABLE	((u32)1 << 31)
+#define CPDMA_CHAN_A_TDOWN	(1 << 30)
+#define TDOWN_TIMEOUT_COUNT	100
+
+struct global_ctl_regs {
+	u32	revision;
+	u32	perf_control;
+	u32	emulation_control;
+	u32	priority_control;
+	u32	qm_base_addr[4];
+};
+
+struct tx_chan_regs {
+	u32	cfg_a;
+	u32	cfg_b;
+	u32	res[6];
+};
+
+struct rx_chan_regs {
+	u32	cfg_a;
+	u32	res[7];
+};
+
+struct rx_flow_regs {
+	u32	control;
+	u32	tags;
+	u32	tag_sel;
+	u32	fdq_sel[2];
+	u32	thresh[3];
+};
+
+struct pktdma_cfg {
+	struct global_ctl_regs	*global;
+	struct tx_chan_regs	*tx_ch;
+	u32			tx_ch_num;
+	struct rx_chan_regs	*rx_ch;
+	u32			rx_ch_num;
+	u32			*tx_sched;
+	struct rx_flow_regs	*rx_flows;
+	u32			rx_flow_num;
+
+	u32			rx_free_q;
+	u32			rx_rcv_q;
+	u32			tx_snd_q;
+
+	u32			rx_flow;	/* flow that is used for RX */
+};
+
+/*
+ * The packet DMA user allocates memory for the RX buffers
+ * and describes it in the following structure.
+ */
+struct rx_buff_desc {
+	u8	*buff_ptr;
+	u32	num_buffs;
+	u32	buff_len;
+	u32	rx_flow;
+};
+
+int	netcp_close(void);
+int	netcp_init(struct rx_buff_desc *rx_buffers);
+int	netcp_send(u32 *pkt, int num_bytes, u32 swinfo2);
+void	*netcp_recv(u32 **pkt, int *num_bytes);
+void	netcp_release_rxhd(void *hd);
+
+#endif /* _KEYSTONE_NAV_H_ */
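
For readers new to the Navigator API, the minimal usage sketch below (not part of
the patch) shows the intended call sequence, assuming CONFIG_SOC_K2HK is set so
that qm_init()/netcp_init() pick the K2HK memory maps. The rx_bufs array, the
example_nav_usage() function and the buffer count/size and rx_flow values are
illustrative assumptions, not code from this series.

/*
 * Hypothetical usage sketch: bring up the Queue Manager and the NetCP
 * packet DMA, send one packet, poll once for a received packet, then
 * shut everything down.  Buffer sizes and the rx_flow number are
 * illustrative values only.
 */
static u8 rx_bufs[32 * 1520] __aligned(16);	/* example RX buffer area */

static int example_nav_usage(u32 *tx_pkt, int tx_len)
{
	struct rx_buff_desc rx_buffs = {
		.buff_ptr  = rx_bufs,
		.num_buffs = 32,
		.buff_len  = 1520,
		.rx_flow   = 0,			/* illustrative flow number */
	};
	u32 *rx_pkt;
	int rx_len;
	void *hd;

	if (qm_init() != QM_OK)			/* QM first: netcp_init() pops from qpool_num */
		return QM_ERR;
	if (netcp_init(&rx_buffs) != QM_OK) {
		qm_close();
		return QM_ERR;
	}

	/* swinfo2 is carried in the descriptor's software info word */
	if (netcp_send(tx_pkt, tx_len, 0) == QM_OK) {
		hd = netcp_recv(&rx_pkt, &rx_len);
		if (hd)
			netcp_release_rxhd(hd);	/* return descriptor to rx_free_q */
	}

	netcp_close();
	qm_close();
	return QM_OK;
}

Note that qm_init() must run before netcp_init(): _netcp_init() seeds rx_free_q
with descriptors popped from the QM free pool (qpool_num), so the pool has to be
populated first.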