From patchwork Thu Apr 24 17:01:38 2014
X-Patchwork-Submitter: Taras Kondratiuk
X-Patchwork-Id: 29007
From: Taras Kondratiuk <taras.kondratiuk@linaro.org>
To: lng-odp@lists.linaro.org
Cc: linaro-networking@linaro.org
Date: Thu, 24 Apr 2014 20:01:38 +0300
Message-Id: <1398358899-9851-8-git-send-email-taras.kondratiuk@linaro.org>
In-Reply-To: <1398358899-9851-1-git-send-email-taras.kondratiuk@linaro.org>
References: <1398358899-9851-1-git-send-email-taras.kondratiuk@linaro.org>
Subject: [lng-odp] [PATCH v4 7/8] Keystone2: Add initial Packet IO

Add simple Packet IO implementation. Packet accelerator is not used,
so packets are sent directly to Ethernet switch ports.
Signed-off-by: Taras Kondratiuk --- .../linux-keystone2/include/odp_buffer_internal.h | 2 - .../include/odp_buffer_pool_internal.h | 1 + .../linux-keystone2/include/odp_packet_internal.h | 141 +++++++ .../include/odp_packet_io_internal.h | 54 +++ .../linux-keystone2/include/odp_queue_internal.h | 1 + platform/linux-keystone2/source/odp_buffer.c | 14 +- platform/linux-keystone2/source/odp_buffer_pool.c | 10 +- platform/linux-keystone2/source/odp_init.c | 4 +- platform/linux-keystone2/source/odp_packet.c | 33 +- platform/linux-keystone2/source/odp_packet_io.c | 436 ++++++++++++++++++++ platform/linux-keystone2/source/odp_queue.c | 8 +- 11 files changed, 674 insertions(+), 30 deletions(-) create mode 100644 platform/linux-keystone2/include/odp_packet_internal.h create mode 100644 platform/linux-keystone2/include/odp_packet_io_internal.h create mode 100644 platform/linux-keystone2/source/odp_packet_io.c diff --git a/platform/linux-keystone2/include/odp_buffer_internal.h b/platform/linux-keystone2/include/odp_buffer_internal.h index b830e12..3973b5c 100644 --- a/platform/linux-keystone2/include/odp_buffer_internal.h +++ b/platform/linux-keystone2/include/odp_buffer_internal.h @@ -60,8 +60,6 @@ typedef struct odp_buffer_hdr_t { void *buf_vaddr; uint32_t free_queue; int type; - struct odp_buffer_hdr_t *next; /* next buf in a list */ - odp_buffer_bits_t handle; /* handle */ } odp_buffer_hdr_t; diff --git a/platform/linux-keystone2/include/odp_buffer_pool_internal.h b/platform/linux-keystone2/include/odp_buffer_pool_internal.h index a77331c..6394a8b 100644 --- a/platform/linux-keystone2/include/odp_buffer_pool_internal.h +++ b/platform/linux-keystone2/include/odp_buffer_pool_internal.h @@ -72,6 +72,7 @@ static inline void *get_pool_entry(odp_buffer_pool_t pool_id) { return pool_entry_ptr[pool_id]; } +uint32_t _odp_pool_get_free_queue(odp_buffer_pool_t pool_id); #ifdef __cplusplus } diff --git a/platform/linux-keystone2/include/odp_packet_internal.h b/platform/linux-keystone2/include/odp_packet_internal.h new file mode 100644 index 0000000..8ccf705 --- /dev/null +++ b/platform/linux-keystone2/include/odp_packet_internal.h @@ -0,0 +1,141 @@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + + +/** + * @file + * + * ODP packet descriptor - implementation internal + */ + +#ifndef ODP_PACKET_INTERNAL_H_ +#define ODP_PACKET_INTERNAL_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#include +#include +#include +#include + +/** + * Packet input & protocol flags + */ +typedef union { + /* All input flags */ + uint32_t all; + + struct { + /* Bitfield flags for each protocol */ + uint32_t l2:1; /**< known L2 protocol present */ + uint32_t l3:1; /**< known L3 protocol present */ + uint32_t l4:1; /**< known L4 protocol present */ + + uint32_t eth:1; /**< Ethernet */ + uint32_t jumbo:1; /**< Jumbo frame */ + uint32_t vlan:1; /**< VLAN hdr found */ + uint32_t vlan_qinq:1; /**< Stacked VLAN found, QinQ */ + + uint32_t arp:1; /**< ARP */ + + uint32_t ipv4:1; /**< IPv4 */ + uint32_t ipv6:1; /**< IPv6 */ + uint32_t ipfrag:1; /**< IP fragment */ + uint32_t ipopt:1; /**< IP optional headers */ + uint32_t ipsec:1; /**< IPSec decryption may be needed */ + + uint32_t udp:1; /**< UDP */ + uint32_t tcp:1; /**< TCP */ + uint32_t sctp:1; /**< SCTP */ + uint32_t icmp:1; /**< ICMP */ + }; +} input_flags_t; + +ODP_ASSERT(sizeof(input_flags_t) == sizeof(uint32_t), INPUT_FLAGS_SIZE_ERROR); + +/** + * Packet error flags + */ +typedef union { + /* All error flags */ + uint32_t all; + + struct { + /* Bitfield flags for each detected error */ + uint32_t frame_len:1; /**< Frame length error */ + uint32_t l2_chksum:1; /**< L2 checksum error, checks TBD */ + uint32_t ip_err:1; /**< IP error, checks TBD */ + uint32_t tcp_err:1; /**< TCP error, checks TBD */ + uint32_t udp_err:1; /**< UDP error, checks TBD */ + }; +} error_flags_t; + +ODP_ASSERT(sizeof(error_flags_t) == sizeof(uint32_t), ERROR_FLAGS_SIZE_ERROR); + +/** + * Packet output flags + */ +typedef union { + /* All output flags */ + uint32_t all; + + struct { + /* Bitfield flags for each output option */ + uint32_t l4_chksum:1; /**< Request L4 checksum calculation */ + }; +} output_flags_t; + +ODP_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), OUTPUT_FLAGS_SIZE_ERROR); + +/** + * Internal Packet header + */ +typedef struct { + /* common buffer header */ + odp_buffer_hdr_t buf_hdr; + + input_flags_t input_flags; + error_flags_t error_flags; + output_flags_t output_flags; + + uint32_t frame_offset; /**< offset to start of frame, even on error */ + uint32_t l2_offset; /**< offset to L2 hdr, e.g. Eth */ + uint32_t l3_offset; /**< offset to L3 hdr, e.g. IPv4, IPv6 */ + uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, also ICMP) */ + + odp_pktio_t input; + +} odp_packet_hdr_t; + +ODP_ASSERT(sizeof(odp_packet_hdr_t) <= 128, ODP_PACKET_HDR_T_SIZE_ERROR); + +/** + * Return the packet header + */ +static inline odp_packet_hdr_t *odp_packet_hdr(odp_packet_t pkt) +{ + return (odp_packet_hdr_t *)odp_buf_to_hdr((odp_buffer_t)pkt); +} + +static inline odp_packet_hdr_t *odp_bufhdr_to_pkthdr(odp_buffer_hdr_t *hdr) +{ + return (odp_packet_hdr_t *)hdr; +} + +/** + * Parse packet and set internal metadata + */ +void odp_packet_parse(odp_packet_t pkt, size_t len, size_t l2_offset); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-keystone2/include/odp_packet_io_internal.h b/platform/linux-keystone2/include/odp_packet_io_internal.h new file mode 100644 index 0000000..a8fe5a2 --- /dev/null +++ b/platform/linux-keystone2/include/odp_packet_io_internal.h @@ -0,0 +1,54 @@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + + +/** + * @file + * + * ODP packet IO - implementation internal + */ + +#ifndef ODP_PACKET_IO_INTERNAL_H_ +#define ODP_PACKET_IO_INTERNAL_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#ifdef ODP_HAVE_NETMAP +#include +#endif + +#define PKTIO_DEV_MAX_NAME_LEN 10 +struct pktio_device { + const char name[PKTIO_DEV_MAX_NAME_LEN]; + uint32_t tx_hw_queue; + uint32_t rx_channel; + uint32_t rx_flow; + uint32_t port_id; +}; + +struct pktio_entry { + odp_spinlock_t lock; /**< entry spinlock */ + int taken; /**< is entry taken(1) or free(0) */ + odp_queue_t inq_default; /**< default input queue, if set */ + odp_queue_t outq_default; /**< default out queue */ + odp_buffer_pool_t in_pool; + struct pktio_device *dev; +}; + +typedef union { + struct pktio_entry s; + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pktio_entry))]; +} pktio_entry_t; + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-keystone2/include/odp_queue_internal.h b/platform/linux-keystone2/include/odp_queue_internal.h index c7c84d6..9a13a00 100644 --- a/platform/linux-keystone2/include/odp_queue_internal.h +++ b/platform/linux-keystone2/include/odp_queue_internal.h @@ -73,6 +73,7 @@ struct queue_entry_s { odp_queue_param_t param; odp_pktio_t pktin; odp_pktio_t pktout; + uint32_t out_port_id; uint32_t hw_queue; char name[ODP_QUEUE_NAME_LEN]; }; diff --git a/platform/linux-keystone2/source/odp_buffer.c b/platform/linux-keystone2/source/odp_buffer.c index 7a50aa2..d4c7cfe 100644 --- a/platform/linux-keystone2/source/odp_buffer.c +++ b/platform/linux-keystone2/source/odp_buffer.c @@ -59,15 +59,16 @@ int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf) len += snprintf(&str[len], n-len, " buf_paddr 0x%x\n", desc->desc.buffPtr); len += snprintf(&str[len], n-len, + " buf_len_o 0x%x\n", desc->desc.origBufferLen); + len += snprintf(&str[len], n-len, + " buf_len 0x%x\n", desc->desc.buffLen); + len += snprintf(&str[len], n-len, " pool %i\n", odp_buf_to_pool(buf)); len += snprintf(&str[len], n-len, " free_queue %u\n", desc->free_queue); len += snprintf(&str[len], n-len, "\n"); - ti_em_rh_dump_mem(desc, sizeof(*desc), "Descriptor dump"); - ti_em_rh_dump_mem(desc->buf_vaddr, 64, "Buffer start"); - return len; } @@ -77,11 +78,18 @@ void odp_buffer_print(odp_buffer_t buf) int max_len = 512; char str[max_len]; int len; + odp_buffer_hdr_t *desc; len = odp_buffer_snprint(str, max_len-1, buf); + if (!len) + return; str[len] = 0; printf("\n%s\n", str); + + desc = odp_buf_to_hdr(buf); + ti_em_rh_dump_mem(desc, sizeof(*desc), "Descriptor dump"); + ti_em_rh_dump_mem(desc->buf_vaddr, 64, "Buffer start"); } void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src) diff --git a/platform/linux-keystone2/source/odp_buffer_pool.c b/platform/linux-keystone2/source/odp_buffer_pool.c index 9a2f6cb..6ce02d4 100644 --- a/platform/linux-keystone2/source/odp_buffer_pool.c +++ b/platform/linux-keystone2/source/odp_buffer_pool.c @@ -163,6 +163,7 @@ static int link_bufs(pool_entry_t *pool) (void *)buf_addr.v, (void *)buf_addr.p); ODP_DBG("%s: num_bufs: %u, desc_index: %u\n", __func__, num_bufs, desc_index); + ODP_DBG("%s: free_queue: %u\n", __func__, pool->s.free_queue); /* FIXME: Need to define error codes somewhere */ if (desc_index == (uint32_t)-1) { @@ -195,10 +196,9 @@ static int link_bufs(pool_entry_t *pool) /* Set defaults in descriptor */ hdr->desc.descInfo = (Cppi_DescType_HOST << 30) | (Cppi_PSLoc_PS_IN_DESC << 22) | - (buf_size & 
0xFFFF); + (pool->s.payload_size & 0xFFFF); hdr->desc.packetInfo = (((uint32_t) Cppi_EPIB_EPIB_PRESENT) << 31) | - (0x2 << 16) | (((uint32_t) Cppi_ReturnPolicy_RETURN_BUFFER) << 15) | (pool->s.free_queue & 0x3FFF); hdr->desc.origBuffPtr = buf_addr.p; @@ -308,3 +308,9 @@ void odp_buffer_pool_print(odp_buffer_pool_t pool_id) { (void)pool_id; } + +uint32_t _odp_pool_get_free_queue(odp_buffer_pool_t pool_id) +{ + pool_entry_t *pool = get_pool_entry(pool_id); + return pool->s.free_queue; +} diff --git a/platform/linux-keystone2/source/odp_init.c b/platform/linux-keystone2/source/odp_init.c index 0b36960..f832551 100644 --- a/platform/linux-keystone2/source/odp_init.c +++ b/platform/linux-keystone2/source/odp_init.c @@ -12,7 +12,7 @@ #include #include #include -#include +#include /* * Make region_configs[] global, because hw_config is saved in @@ -49,7 +49,7 @@ static int ti_init_hw_config(void) reg_config = ®ion_configs[TI_EM_RH_PUBLIC]; reg_config->region_idx = TI_ODP_PUBLIC_REGION_IDX; reg_config->desc_size = - ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t)); + ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t)); reg_config->desc_num = TI_ODP_PUBLIC_DESC_NUM; reg_config->desc_base = TI_ODP_PUBLIC_DESC_BASE; reg_config->desc_vbase = TI_ODP_PUBLIC_DESC_VBASE; diff --git a/platform/linux-keystone2/source/odp_packet.c b/platform/linux-keystone2/source/odp_packet.c index f03d849..37a0d46 100644 --- a/platform/linux-keystone2/source/odp_packet.c +++ b/platform/linux-keystone2/source/odp_packet.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -16,20 +17,13 @@ #include static inline uint8_t parse_ipv4(odp_packet_hdr_t *pkt_hdr, odp_ipv4hdr_t *ipv4, - size_t *offset_out); + size_t *offset_out); static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr, odp_ipv6hdr_t *ipv6, - size_t *offset_out); + size_t *offset_out); void odp_packet_init(odp_packet_t pkt) { odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); - const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr); - uint8_t *start; - size_t len; - - start = (uint8_t *)pkt_hdr + start_offset; - len = ODP_OFFSETOF(odp_packet_hdr_t, payload) - start_offset; - memset(start, 0, len); pkt_hdr->l2_offset = (uint32_t) ODP_PACKET_OFFSET_INVALID; pkt_hdr->l3_offset = (uint32_t) ODP_PACKET_OFFSET_INVALID; @@ -48,12 +42,12 @@ odp_buffer_t odp_buffer_from_packet(odp_packet_t pkt) void odp_packet_set_len(odp_packet_t pkt, size_t len) { - odp_packet_hdr(pkt)->frame_len = len; + ti_em_cppi_set_pktlen(&odp_packet_hdr(pkt)->buf_hdr.desc, len); } size_t odp_packet_get_len(odp_packet_t pkt) { - return odp_packet_hdr(pkt)->frame_len; + return ti_em_cppi_get_pktlen(&odp_packet_hdr(pkt)->buf_hdr.desc); } uint8_t *odp_packet_buf_addr(odp_packet_t pkt) @@ -130,8 +124,9 @@ void odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset) /** * Simple packet parser: eth, VLAN, IP, TCP/UDP/ICMP * - * Internal function: caller is resposible for passing only valid packet handles - * , lengths and offsets (usually done&called in packet input). + * Internal function: caller is responsible for passing only + * valid packet handles, lengths and offsets + * (usually done&called in packet input). 
* * @param pkt Packet handle * @param len Packet length in bytes @@ -150,7 +145,6 @@ void odp_packet_parse(odp_packet_t pkt, size_t len, size_t frame_offset) pkt_hdr->input_flags.eth = 1; pkt_hdr->frame_offset = frame_offset; - pkt_hdr->frame_len = len; if (odp_unlikely(len < ODP_ETH_LEN_MIN)) { pkt_hdr->error_flags.frame_len = 1; @@ -159,6 +153,9 @@ void odp_packet_parse(odp_packet_t pkt, size_t len, size_t frame_offset) pkt_hdr->input_flags.jumbo = 1; } + len -= 4; /* Crop L2 CRC */ + ti_em_cppi_set_pktlen(&pkt_hdr->buf_hdr.desc, len); + /* Assume valid L2 header, no CRC/FCS check in SW */ pkt_hdr->input_flags.l2 = 1; pkt_hdr->l2_offset = frame_offset; @@ -236,7 +233,7 @@ void odp_packet_parse(odp_packet_t pkt, size_t len, size_t frame_offset) } static inline uint8_t parse_ipv4(odp_packet_hdr_t *pkt_hdr, odp_ipv4hdr_t *ipv4, - size_t *offset_out) + size_t *offset_out) { uint8_t ihl; uint16_t frag_offset; @@ -276,7 +273,7 @@ static inline uint8_t parse_ipv4(odp_packet_hdr_t *pkt_hdr, odp_ipv4hdr_t *ipv4, } static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr, odp_ipv6hdr_t *ipv6, - size_t *offset_out) + size_t *offset_out) { if (ipv6->next_hdr == ODP_IPPROTO_ESP || ipv6->next_hdr == ODP_IPPROTO_AH) { @@ -321,12 +318,14 @@ void odp_packet_print(odp_packet_t pkt) len += snprintf(&str[len], n-len, " l4_offset %u\n", hdr->l4_offset); len += snprintf(&str[len], n-len, - " frame_len %u\n", hdr->frame_len); + " packet len %u\n", odp_packet_get_len(pkt)); len += snprintf(&str[len], n-len, " input %u\n", hdr->input); str[len] = '\0'; printf("\n%s\n", str); + ti_em_rh_dump_mem(hdr, sizeof(*hdr), "Descriptor dump"); + ti_em_rh_dump_mem(hdr->buf_hdr.buf_vaddr, 64, "Buffer start"); } int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src) diff --git a/platform/linux-keystone2/source/odp_packet_io.c b/platform/linux-keystone2/source/odp_packet_io.c new file mode 100644 index 0000000..1ded021 --- /dev/null +++ b/platform/linux-keystone2/source/odp_packet_io.c @@ -0,0 +1,436 @@ +/* Copyright (c) 2013, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#ifdef ODP_HAVE_NETMAP +#include +#endif +#include +#include +#include +#include +#include +#include + +#include +#ifdef ODP_HAVE_NETMAP +#include +#endif + +#include + +typedef struct { + pktio_entry_t entries[ODP_CONFIG_PKTIO_ENTRIES]; +} pktio_table_t; + +static pktio_table_t *pktio_tbl; + +struct pktio_device pktio_devs[] = { + /* eth0 is used by Linux kernel */ + /* {.name = "eth0", .tx_hw_queue = 648, .rx_channel = 22, .rx_flow = 22, .port_id = 1}, */ + {.name = "eth1", .tx_hw_queue = 648, .rx_channel = 23, .rx_flow = 23, .port_id = 2}, + {.name = "eth2", .tx_hw_queue = 648, .rx_channel = 24, .rx_flow = 24, .port_id = 3}, + {.name = "eth3", .tx_hw_queue = 648, .rx_channel = 25, .rx_flow = 25, .port_id = 4}, +}; + +static struct pktio_device *_odp_pktio_dev_lookup(const char *name) +{ + int i; + int num = sizeof(pktio_devs)/sizeof(pktio_devs[0]); + for (i = 0; i < num; i++) + if (!strncmp(pktio_devs[i].name, name, PKTIO_DEV_MAX_NAME_LEN)) + return &pktio_devs[i]; + return NULL; +} + +static pktio_entry_t *get_entry(odp_pktio_t id) +{ + if (odp_unlikely(id == ODP_PKTIO_INVALID || + id > ODP_CONFIG_PKTIO_ENTRIES)) + return NULL; + + return &pktio_tbl->entries[id - 1]; +} + +int odp_pktio_init_global(void) +{ + pktio_entry_t *pktio_entry; + int id, i; + int dev_num = sizeof(pktio_devs)/sizeof(pktio_devs[0]); + + pktio_tbl = odp_shm_reserve("odp_pktio_entries", + sizeof(pktio_table_t), + sizeof(pktio_entry_t)); + if (pktio_tbl == NULL) + return -1; + + memset(pktio_tbl, 0, sizeof(pktio_table_t)); + + for (id = 1; id <= ODP_CONFIG_PKTIO_ENTRIES; ++id) { + pktio_entry = get_entry(id); + + odp_spinlock_init(&pktio_entry->s.lock); + } + + /* Close all used RX channels */ + for (i = 0; i < dev_num; i++) + ti_em_osal_cppi_rx_channel_close(Cppi_CpDma_PASS_CPDMA, + pktio_devs[i].rx_channel); + + return 0; +} + +int odp_pktio_init_local(void) +{ + return 0; +} + +static int is_free(pktio_entry_t *entry) +{ + return (entry->s.taken == 0); +} + +static void set_free(pktio_entry_t *entry) +{ + entry->s.taken = 0; +} + +static void set_taken(pktio_entry_t *entry) +{ + entry->s.taken = 1; +} + +static void lock_entry(pktio_entry_t *entry) +{ + odp_spinlock_lock(&entry->s.lock); +} + +static void unlock_entry(pktio_entry_t *entry) +{ + odp_spinlock_unlock(&entry->s.lock); +} + +static odp_pktio_t alloc_lock_pktio_entry(odp_pktio_params_t *params) +{ + odp_pktio_t id; + pktio_entry_t *entry; + int i; + (void)params; + for (i = 0; i < ODP_CONFIG_PKTIO_ENTRIES; ++i) { + entry = &pktio_tbl->entries[i]; + if (is_free(entry)) { + lock_entry(entry); + if (is_free(entry)) { + set_taken(entry); + entry->s.inq_default = ODP_QUEUE_INVALID; + entry->s.outq_default = ODP_QUEUE_INVALID; + id = i + 1; + return id; /* return with entry locked! 
*/ + } + unlock_entry(entry); + } + } + + return ODP_PKTIO_INVALID; +} + +static int free_pktio_entry(odp_pktio_t id) +{ + pktio_entry_t *entry = get_entry(id); + + if (entry == NULL) + return -1; + + set_free(entry); + + return 0; +} + +odp_pktio_t odp_pktio_open(const char *dev, odp_buffer_pool_t pool, + odp_pktio_params_t *params) +{ + odp_pktio_t id; + pktio_entry_t *pktio_entry; + char name[ODP_QUEUE_NAME_LEN]; + queue_entry_t *queue_entry; + odp_queue_t qid = ODP_QUEUE_INVALID; + + if (params == NULL) { + ODP_ERR("Invalid pktio params\n"); + return ODP_PKTIO_INVALID; + } + + ODP_DBG("Allocating HW pktio\n"); + + id = alloc_lock_pktio_entry(params); + if (id == ODP_PKTIO_INVALID) { + ODP_ERR("No resources available.\n"); + return ODP_PKTIO_INVALID; + } + /* if successful, alloc_pktio_entry() returns with the entry locked */ + + pktio_entry = get_entry(id); + + /* Create a default output queue for each pktio resource */ + snprintf(name, sizeof(name), "%i-pktio_outq_default", (int)id); + name[ODP_QUEUE_NAME_LEN-1] = '\0'; + + pktio_entry->s.dev = _odp_pktio_dev_lookup(dev); + if (!pktio_entry->s.dev) { + free_pktio_entry(id); + id = ODP_PKTIO_INVALID; + goto unlock; + } + + qid = _odp_queue_create(name, ODP_QUEUE_TYPE_PKTOUT, NULL, + pktio_entry->s.dev->tx_hw_queue); + ODP_DBG("Created queue %u for hw queue %d\n", (uint32_t)qid, + pktio_entry->s.dev->tx_hw_queue); + if (qid == ODP_QUEUE_INVALID) { + free_pktio_entry(id); + id = ODP_PKTIO_INVALID; + goto unlock; + } + pktio_entry->s.in_pool = pool; + pktio_entry->s.outq_default = qid; + + queue_entry = queue_to_qentry(qid); + queue_entry->s.pktout = id; + queue_entry->s.out_port_id = pktio_entry->s.dev->port_id; +unlock: + unlock_entry(pktio_entry); + return id; +} + +int odp_pktio_close(odp_pktio_t id) +{ + pktio_entry_t *entry; + int res = -1; + + entry = get_entry(id); + if (entry == NULL) + return -1; + + lock_entry(entry); + if (!is_free(entry)) { + /* FIXME: Here rx/tx channels should be closed */ + res |= free_pktio_entry(id); + } + + unlock_entry(entry); + + if (res != 0) + return -1; + + return 0; +} + +void odp_pktio_set_input(odp_packet_t pkt, odp_pktio_t pktio) +{ + odp_packet_hdr(pkt)->input = pktio; +} + +odp_pktio_t odp_pktio_get_input(odp_packet_t pkt) +{ + return odp_packet_hdr(pkt)->input; +} + +int odp_pktio_recv(odp_pktio_t id, odp_packet_t pkt_table[], unsigned len) +{ + pktio_entry_t *pktio_entry = get_entry(id); + unsigned pkts = 0; + odp_buffer_t buf; + + if (pktio_entry == NULL) + return -1; + + lock_entry(pktio_entry); + + if (pktio_entry->s.inq_default == ODP_QUEUE_INVALID) { + char name[ODP_QUEUE_NAME_LEN]; + odp_queue_param_t qparam; + odp_queue_t inq_def; + /* + * Create a default input queue. + * FIXME: IT is a kind of WA for current ODP API usage. + * It should be revised. 
+ */ + ODP_DBG("Creating default input queue\n"); + qparam.sched.prio = ODP_SCHED_PRIO_DEFAULT; + qparam.sched.sync = ODP_SCHED_SYNC_NONE; + qparam.sched.group = ODP_SCHED_GROUP_DEFAULT; + snprintf(name, sizeof(name), "%i-pktio_inq_default", (int)id); + name[ODP_QUEUE_NAME_LEN-1] = '\0'; + inq_def = odp_queue_create(name, ODP_QUEUE_TYPE_PKTIN, &qparam); + if (inq_def == ODP_QUEUE_INVALID) { + ODP_ERR("pktio queue creation failed\n"); + goto unlock; + } + + if (odp_pktio_inq_setdef(id, inq_def)) { + ODP_ERR("default input-Q setup\n"); + goto unlock; + } + } + + for (pkts = 0; pkts < len; pkts++) { + buf = odp_queue_deq(pktio_entry->s.inq_default); + if (!odp_buffer_is_valid(buf)) + break; + + pkt_table[pkts] = odp_packet_from_buffer(buf); + } +unlock: + unlock_entry(pktio_entry); + return pkts; +} + +int odp_pktio_send(odp_pktio_t id, odp_packet_t pkt_table[], unsigned len) +{ + pktio_entry_t *pktio_entry = get_entry(id); + unsigned pkts; + int ret; + + if (pktio_entry == NULL) + return -1; + + lock_entry(pktio_entry); + + for (pkts = 0; pkts < len; pkts++) { + ret = odp_queue_enq(pktio_entry->s.outq_default, + odp_buffer_from_packet(pkt_table[pkts])); + if (ret) + break; + } + unlock_entry(pktio_entry); + return pkts; +} + +int odp_pktio_inq_setdef(odp_pktio_t id, odp_queue_t queue) +{ + pktio_entry_t *pktio_entry = get_entry(id); + queue_entry_t *qentry = queue_to_qentry(queue); + + if (pktio_entry == NULL || qentry == NULL) + return -1; + + if (qentry->s.type != ODP_QUEUE_TYPE_PKTIN) + return -1; + + pktio_entry->s.inq_default = queue; + { + uint32_t free_queue = + _odp_pool_get_free_queue(pktio_entry->s.in_pool); + ti_em_osal_cppi_rx_channel_close(Cppi_CpDma_PASS_CPDMA, + pktio_entry->s.dev->rx_channel); + ti_em_osal_cppi_rx_flow_open(Cppi_CpDma_PASS_CPDMA, + pktio_entry->s.dev->rx_flow, + qentry->s.hw_queue, + free_queue, + 0); + ti_em_osal_cppi_rx_channel_open(Cppi_CpDma_PASS_CPDMA, + pktio_entry->s.dev->rx_channel); + ODP_DBG("%s: Opened rx flow %u with dest queue: %u and free queue: %u\n", + __func__, + pktio_entry->s.dev->rx_flow, + qentry->s.hw_queue, + free_queue); + } + + queue_lock(qentry); + qentry->s.pktin = id; + qentry->s.status = QUEUE_STATUS_SCHED; + queue_unlock(qentry); + + odp_schedule_queue(queue, qentry->s.param.sched.prio); + + return 0; +} + +int odp_pktio_inq_remdef(odp_pktio_t id) +{ + return odp_pktio_inq_setdef(id, ODP_QUEUE_INVALID); +} + +odp_queue_t odp_pktio_inq_getdef(odp_pktio_t id) +{ + pktio_entry_t *pktio_entry = get_entry(id); + + if (pktio_entry == NULL) + return ODP_QUEUE_INVALID; + + return pktio_entry->s.inq_default; +} + +odp_queue_t odp_pktio_outq_getdef(odp_pktio_t id) +{ + pktio_entry_t *pktio_entry = get_entry(id); + + if (pktio_entry == NULL) + return ODP_QUEUE_INVALID; + + return pktio_entry->s.outq_default; +} + +int pktout_enqueue(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr) +{ + /* + * Set port number directly in a descriptor. + * TODO: Remove it when PA will be used. 
+ */ + ti_em_cppi_set_psflags(&buf_hdr->desc, queue->s.out_port_id); + return queue_enq(queue, buf_hdr); +} + +int pktout_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num) +{ + int i; + uint32_t port_id = queue->s.out_port_id; + for (i = 0; i < num; i++) + ti_em_cppi_set_psflags(&buf_hdr[i]->desc, port_id); + return queue_enq_multi(queue, buf_hdr, num); +} + +static inline void update_in_packet(odp_buffer_hdr_t *buf_hdr, + odp_pktio_t pktin) +{ + if (!buf_hdr) + return; + + odp_buffer_t buf = hdr_to_odp_buf(buf_hdr); + odp_packet_t pkt = odp_packet_from_buffer(buf); + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + size_t len = odp_packet_get_len(pkt); + pkt_hdr->input = pktin; + odp_packet_parse(pkt, len, 0); +} + +odp_buffer_hdr_t *pktin_dequeue(queue_entry_t *queue) +{ + odp_buffer_hdr_t *buf_hdr; + buf_hdr = queue_deq(queue); + + update_in_packet(buf_hdr, queue->s.pktin); + return buf_hdr; +} + +int pktin_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num) +{ + int i; + num = queue_deq_multi(queue, buf_hdr, num); + + for (i = 0; i < num; i++) + update_in_packet(buf_hdr[i], queue->s.pktin); + return num; +} diff --git a/platform/linux-keystone2/source/odp_queue.c b/platform/linux-keystone2/source/odp_queue.c index 8e6c2fe..031eeff 100644 --- a/platform/linux-keystone2/source/odp_queue.c +++ b/platform/linux-keystone2/source/odp_queue.c @@ -66,16 +66,16 @@ static void queue_init(queue_entry_t *queue, const char *name, switch (type) { case ODP_QUEUE_TYPE_PKTIN: - queue->s.enqueue = pktin_enqueue; + queue->s.enqueue = queue_enq; queue->s.dequeue = pktin_dequeue; - queue->s.enqueue_multi = pktin_enq_multi; + queue->s.enqueue_multi = queue_enq_multi; queue->s.dequeue_multi = pktin_deq_multi; break; case ODP_QUEUE_TYPE_PKTOUT: queue->s.enqueue = pktout_enqueue; - queue->s.dequeue = pktout_dequeue; + queue->s.dequeue = queue_deq; queue->s.enqueue_multi = pktout_enq_multi; - queue->s.dequeue_multi = pktout_deq_multi; + queue->s.dequeue_multi = queue_deq_multi; break; default: queue->s.enqueue = queue_enq;
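
For context, here is a minimal usage sketch of the pktio interface added by this
patch. It assumes an already-initialized ODP application and an existing buffer
pool handle passed in by the caller; the device name "eth1" comes from the
pktio_devs[] table in odp_packet_io.c, while the burst size, the loop bound and
the top-level odp.h include are illustrative assumptions, not part of the patch.

#include <string.h>
#include <odp.h>

#define MAX_BURST 16

/* Receive a number of bursts on "eth1" and loop them back out of the
 * same port via the default output queue created by odp_pktio_open(). */
static int pktio_loopback_example(odp_buffer_pool_t pkt_pool)
{
	odp_pktio_params_t params;
	odp_packet_t pkt_tbl[MAX_BURST];
	odp_pktio_t pktio;
	int pkts, sent, i, j;

	/* This implementation only checks params for NULL */
	memset(&params, 0, sizeof(params));

	pktio = odp_pktio_open("eth1", pkt_pool, &params);
	if (pktio == ODP_PKTIO_INVALID)
		return -1;

	for (i = 0; i < 1024; i++) {
		/* The default input queue and RX flow are set up lazily
		 * on the first call to odp_pktio_recv() */
		pkts = odp_pktio_recv(pktio, pkt_tbl, MAX_BURST);
		if (pkts <= 0)
			continue;

		sent = odp_pktio_send(pktio, pkt_tbl, pkts);
		if (sent < 0)
			sent = 0;

		/* Free any packets that could not be enqueued for TX */
		for (j = sent; j < pkts; j++)
			odp_buffer_free(odp_buffer_from_packet(pkt_tbl[j]));
	}

	return odp_pktio_close(pktio);
}

Note that no explicit input queue setup is needed here: odp_pktio_recv()
creates a default scheduled input queue and opens the RX flow on first use,
as implemented above.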