From patchwork Wed Nov 12 23:22:01 2014
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 40700
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Wed, 12 Nov 2014 17:22:01 -0600
Message-Id: <1415834521-455-4-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1415834521-455-1-git-send-email-bill.fischofer@linaro.org>
References: <1415834521-455-1-git-send-email-bill.fischofer@linaro.org>
MIME-Version: 1.0
Subject: [lng-odp] [PATCHv3 ODP v1.0 Buffer/Packet APIs 3/3] Code changes for ODP v1.0 Buffer/Packet APIs
Sender: lng-odp-bounces@lists.linaro.org
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 platform/linux-generic/include/api/odp_buffer.h    |  490 ++++-
 .../linux-generic/include/api/odp_buffer_pool.h    |  391 +++-
 platform/linux-generic/include/api/odp_config.h    |    6 +
 platform/linux-generic/include/api/odp_packet.h    | 2231 ++++++++++++++++++--
 platform/linux-generic/include/api/odp_packet_io.h |    4 +-
 platform/linux-generic/include/api/odp_typedefs.h  |   60 +
 .../linux-generic/include/odp_buffer_inlines.h     |  234 ++
 .../linux-generic/include/odp_buffer_internal.h    |  118 +-
 .../include/odp_buffer_pool_internal.h             |  159 +-
 .../linux-generic/include/odp_packet_internal.h    |  130 +-
 .../linux-generic/include/odp_timer_internal.h     |   15 +-
 platform/linux-generic/odp_buffer.c                |  265 ++-
 platform/linux-generic/odp_buffer_pool.c           |  673 +++---
 platform/linux-generic/odp_packet.c                | 1202 ++++++++---
 14 files changed, 4900 insertions(+), 1078 deletions(-)
 create mode 100644 platform/linux-generic/include/api/odp_typedefs.h
 create mode 100644 platform/linux-generic/include/odp_buffer_inlines.h

diff --git a/platform/linux-generic/include/api/odp_buffer.h b/platform/linux-generic/include/api/odp_buffer.h
index 289e0eb..b478e59 100644
--- a/platform/linux-generic/include/api/odp_buffer.h
+++ b/platform/linux-generic/include/api/odp_buffer.h
@@ -8,7 +8,88 @@
 /**
  * @file
  *
- * ODP buffer descriptor
+ * @par Buffer
+ * A buffer is an element of a buffer pool used for storing
+ * information. Buffers are referenced by an abstract handle of type
+ * odp_buffer_t. Buffers have associated buffer types that describe
+ * their intended use and the type of metadata that is associated
+ * with them. Buffers of a specific type may be referenced for
+ * processing by cores or by offload engines. Buffers are also
+ * transmitted via queues from one processing element to another.
+ *
+ * @par Buffer Types
+ * An ODP buffer type is identified by the
+ * odp_buffer_type_e enum. It defines the semantics that are to be
+ * attached to the buffer and defines the type of metadata that is
+ * associated with it. ODP implementations MUST support the following
+ * buffer types:
+ *
+ * - ODP_BUFFER_TYPE_RAW
+ * This is the “basic” buffer type
+ * which simply consists of a single fixed-sized block of contiguous
+ * memory. Buffers of this type do not support user metadata and the
+ * only built-in metadata supported for this type of buffer are those
+ * that are statically computable, such as pool and size. This type of
+ * buffer is entirely under application control and most of the buffer
+ * APIs defined in this document are not available. The subset of APIs
+ * that does apply to raw buffers is described in this document.
+ *
+ * - ODP_BUFFER_TYPE_PACKET
+ * This buffer type is suitable for receiving,
+ * processing, and transmitting network packet data. Included in this
+ * type is a rich set of primitives for manipulating buffer aggregates
+ * and for storing system and user metadata. APIs for this type of
+ * buffer are described here and in the ODP Packet Management Design
+ * document.
+ *
+ * - ODP_BUFFER_TYPE_TIMEOUT
+ * This buffer type is suitable for
+ * representing timer timeout events.
+ * It does not support buffer
+ * aggregation but does support user metadata. APIs for this type of
+ * buffer are described here and in the ODP Timer Management Design
+ * document.
+ *
+ * - ODP_BUFFER_TYPE_ANY
+ * A “universal” buffer type capable of
+ * storing information needed for any other buffer type. It is not
+ * intended to be used directly, but exists for possible
+ * implementation convenience.
+ *
+ * @par Metadata
+ * Metadata is additional information relating to a
+ * buffer that is distinct from the application data normally held in
+ * the buffer. Implementations MAY choose to implement metadata as
+ * contiguous with a buffer (e.g., in an implementation-managed prefix
+ * area of the buffer) or in a physically separate metadata area
+ * efficiently accessible by the implementation using the same
+ * identifier as the buffer itself. ODP applications MUST NOT make
+ * assumptions about the addressability relationship between a buffer
+ * and its associated metadata, or between metadata items.
+ * Application use of metadata MUST only be via accessor functions.
+ *
+ * @par Note on OPTIONAL APIs
+ * Every conforming ODP implementation MUST
+ * provide implementations for each API described here. If an API is
+ * designated as OPTIONAL, this means that it is acceptable for an
+ * implementation to do nothing except return
+ * ODP_FUNCTION_NOT_AVAILABLE in response to this call. Note that this
+ * may limit the range of ODP applications supported by a given
+ * implementation since applications needing the functionality of the
+ * optional API will likely choose to deploy on other ODP platforms.
+ *
+ * @par
+ * APIs are designated as OPTIONAL under two conditions:
+ *
+ * -# The API is expected to be difficult to provide efficiently on all
+ * platforms.
+ *
+ * -# A significant number of ODP applications are expected to exist
+ * that will not need or use this API.
+ *
+ * @par
+ * Under these circumstances, an API is designated as OPTIONAL to
+ * permit ODP implementations to be conformant while still expecting
+ * to be able to run a significant number of ODP applications.
  */

 #ifndef ODP_BUFFER_H_
@@ -20,74 +101,441 @@ extern "C" {

 #include
-
+#include

 /** @defgroup odp_buffer ODP BUFFER
- * Operations on a buffer.
- * @{
+ *
+ * @{
  */

 /**
- * ODP buffer
+ * ODP buffer options
+ *
+ * @note These options are additive so an application can simply
+ * specify a buf_opts by ORing together the options needed. Note that
+ * buffer pool options are themselves OPTIONAL and a given
+ * implementation MAY fail the buffer pool creation request with an
+ * appropriate errno if the requested option is not supported by the
+ * underlying ODP implementation, with the exception that UNSEGMENTED
+ * pools MUST be supported for non-packet types and for packet types
+ * as long as the requested size is less than the
+ * implementation-defined native packet segment size.
+ *
+ * Use ODP_BUFFER_OPTS_NONE to specify default buffer pool options
+ * with no additions. The ODP_BUFFER_OPTS_UNSEGMENTED option
+ * specifies that the buffer pool should be unsegmented.
+ *
+ * @par Segmented vs. Unsegmented Buffer Pools
+ * By default, the buffers
+ * in ODP buffer pools are logical buffers that support transparent
+ * segmentation managed by ODP on behalf of the application and have a
+ * rich set of associated semantics as described here.
+ * ODP_BUFFER_OPTS_UNSEGMENTED indicates that the buf_size specified
+ * for the pool should be regarded as a fixed buffer size for all pool
+ * elements and that segmentation support is not needed for the pool.
+ * This MAY result in greater efficiency on some implementations. For
+ * packet processing, a typical use of unsegmented pools would be in
+ * conjunction with classification rules that sort packets into
+ * different pools based on their lengths, thus ensuring that each
+ * packet occupies a single segment within an appropriately-sized
+ * buffer.
  */
-typedef uint32_t odp_buffer_t;
+typedef enum odp_buffer_opts {
+	ODP_BUFFER_OPTS_NONE = 0,        /**< Default, no buffer options */
+	ODP_BUFFER_OPTS_UNSEGMENTED = 1, /**< No segments, please */
+} odp_buffer_opts_e;

-#define ODP_BUFFER_INVALID (0xffffffff) /**< Invalid buffer */
+/**
+ * Error returns
+ */
+#define ODP_BUFFER_INVALID (odp_buffer_t)(-1)
+#define ODP_SEGMENT_INVALID (odp_buffer_segment_t)(-1)

+/**
+ * Obtain buffer pool handle of a buffer
+ *
+ * @param[in] buf Buffer handle
+ *
+ * @return Buffer pool the buffer was allocated from
+ *
+ * @note This routine is an accessor function that returns the handle
+ * of the buffer pool containing the referenced buffer.
+ */
+odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf);

 /**
  * Buffer start address
  *
- * @param buf Buffer handle
+ * @param[in] buf Buffer handle
  *
  * @return Buffer start address
  */
 void *odp_buffer_addr(odp_buffer_t buf);

 /**
- * Buffer maximum data size
+ * Buffer application data size
+ *
+ * @param[in] buf Buffer handle
  *
- * @param buf Buffer handle
+ * @return Buffer application data size
  *
- * @return Buffer maximum data size
+ * @note The size returned by this routine is the size of the
+ * application data contained within the buffer and does not include
+ * any implementation-defined overhead to support metadata, etc. ODP
+ * does not define APIs for determining the amount of storage that is
+ * physically allocated by an implementation to support ODP buffers.
  */
 size_t odp_buffer_size(odp_buffer_t buf);

 /**
  * Buffer type
  *
- * @param buf Buffer handle
+ * @param[in] buf Buffer handle
  *
  * @return Buffer type
  */
-int odp_buffer_type(odp_buffer_t buf);
+odp_buffer_type_e odp_buffer_type(odp_buffer_t buf);

-#define ODP_BUFFER_TYPE_INVALID (-1) /**< Buffer type invalid */
-#define ODP_BUFFER_TYPE_ANY     0    /**< Buffer that can hold any other
-                                          buffer type */
-#define ODP_BUFFER_TYPE_RAW     1    /**< Raw buffer, no additional metadata */
-#define ODP_BUFFER_TYPE_PACKET  2    /**< Packet buffer */
-#define ODP_BUFFER_TYPE_TIMEOUT 3    /**< Timeout buffer */
+/**
+ * Get address and size of user metadata for buffer
+ *
+ * @param[in] buf Buffer handle
+ * @param[out] udata_size Number of bytes of user metadata available
+ * at the returned address
+ *
+ * @return Address of the user metadata for this buffer
+ * or NULL if the buffer has no user metadata.
+ */
+void *odp_buffer_udata(odp_buffer_t buf, size_t *udata_size);

+/**
+ * Get address of user metadata for buffer
+ *
+ * @param[in] buf Buffer handle
+ *
+ * @return Address of the user metadata for this buffer
+ * or NULL if the buffer has no user metadata.
+ *
+ * @note This is a "fastpath" version of odp_buffer_udata() since it
+ * omits returning the size of the user metadata area. Callers are
+ * expected to know and honor this limit nonetheless.
+ */
+void *odp_buffer_udata_addr(odp_buffer_t buf);
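To make the intended use concrete, here is an informative sketch (illustrative only, not part of the patch): an application that reserves a 64-bit sequence number as per-buffer user metadata when creating its pool could use these two accessors as follows.

/* Informative sketch only: stamp and read a sequence number kept in
 * a buffer's user metadata. Assumes the containing pool was created
 * with udata_size >= sizeof(uint64_t). */
static void seqno_set(odp_buffer_t buf, uint64_t seqno)
{
	size_t udata_size;
	uint64_t *udata = odp_buffer_udata(buf, &udata_size);

	if (udata == NULL || udata_size < sizeof(uint64_t))
		return;                /* no usable user metadata */

	*udata = seqno;
}

static uint64_t seqno_get(odp_buffer_t buf)
{
	/* Fastpath variant: the size is known from pool creation */
	uint64_t *udata = odp_buffer_udata_addr(buf);

	return udata != NULL ? *udata : 0;
}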
 /**
  * Tests if buffer is valid
  *
- * @param buf Buffer handle
+ * @param[in] buf Buffer handle
  *
  * @return 1 if valid, otherwise 0
+ *
+ * @note Since buffer operations typically occur in fastpath sections
+ * of applications, by default most ODP APIs assume that valid buffer
+ * handles are passed to them and results are undefined if this
+ * assumption is not met. This routine exists to enable an
+ * application to request explicit validation of a buffer handle. It
+ * is understood that the performance of this operation MAY vary
+ * considerably on a per-implementation basis.
  */
 int odp_buffer_is_valid(odp_buffer_t buf);

 /**
+ * Tests if buffer is segmented
+ *
+ * @param[in] buf Buffer handle
+ *
+ * @return 1 if buffer has more than one segment, otherwise 0
+ *
+ * @note This routine behaves identically to the test
+ * odp_buffer_segment_count() > 1, but is potentially more efficient
+ * and represents the preferred method of determining a buffer's
+ * segmentation status.
+ */
+int odp_buffer_is_segmented(odp_buffer_t buf);
+
+/**
  * Print buffer metadata to STDOUT
  *
- * @param buf Buffer handle
+ * @param[in] buf Buffer handle
  *
+ * @note This routine is intended for diagnostic use and prints
+ * implementation-defined information concerning the buffer to the ODP
+ * LOG. Its provision is OPTIONAL.
  */
 void odp_buffer_print(odp_buffer_t buf);

 /**
+ * Get count of number of segments in a buffer
+ *
+ * @param[in] buf Buffer handle
+ *
+ * @return Count of the number of segments in buf
+ */
+size_t odp_buffer_segment_count(odp_buffer_t buf);
+
+/**
+ * Get the segment identifier for a buffer segment by index
+ *
+ * @param[in] buf Buffer handle
+ * @param[in] ndx Segment index of segment of interest
+ *
+ * @return Segment handle or ODP_SEGMENT_INVALID if the
+ * supplied ndx is out of range.
+ */
+odp_buffer_segment_t odp_buffer_segment_by_index(odp_buffer_t buf, size_t ndx);
+
+/**
+ * Get the next segment handle for a buffer segment
+ *
+ * @param[in] buf Buffer handle
+ * @param[in] seg Segment identifier of the previous segment
+ *
+ * @return Segment identifier of next segment or ODP_SEGMENT_INVALID
+ *
+ * @note This routine returns the identifier (odp_buffer_segment_t) of
+ * the next buffer segment in a buffer aggregate. The input
+ * specifies the buffer and the previous segment identifier. There are
+ * three use cases for this routine:
+ * @note
+ * -# If the input seg is ODP_SEGMENT_START then the segment identifier returned
+ * is that of the first segment in the buffer. ODP_SEGMENT_NULL MAY be used
+ * as a synonym for ODP_SEGMENT_START for symmetry if desired.
+ *
+ * -# If the input seg is not the last segment in the buffer then the
+ * segment handle of the next segment following seg is returned.
+ *
+ * -# If the input seg is the segment identifier of the last segment in
+ * the buffer then ODP_SEGMENT_NULL is returned.
+ *
+ */
+odp_buffer_segment_t odp_buffer_segment_next(odp_buffer_t buf,
+					     odp_buffer_segment_t seg);
+
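The three iteration cases above combine naturally with the mapping calls declared next; as an informative sketch (assuming ODP_SEGMENT_START and the odp_buffer_segment_map()/odp_buffer_segment_unmap() routines that follow), an application could walk every segment of a buffer like this:

/* Informative sketch only: total the addressable bytes of a buffer
 * by walking its segment list. */
static size_t buffer_bytes(odp_buffer_t buf)
{
	odp_buffer_segment_t seg = ODP_SEGMENT_START;
	size_t total = 0;

	while ((seg = odp_buffer_segment_next(buf, seg)) !=
	       ODP_SEGMENT_NULL) {
		size_t seglen;
		uint8_t *addr = odp_buffer_segment_map(buf, seg, &seglen);

		if (addr == NULL)
			break;         /* mapping failed */

		/* ... inspect addr[0] through addr[seglen - 1] ... */
		total += seglen;

		odp_buffer_segment_unmap(seg);
	}

	return total;
}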
+/**
+ * Get addressability for a specified buffer segment
+ *
+ * @param[in] buf Buffer handle
+ * @param[in] seg Segment handle of the segment to be mapped
+ * @param[out] seglen Returned number of bytes in this buffer segment
+ * available at the returned address
+ *
+ * @return Segment start address or NULL
+ *
+ * @note This routine is used to obtain addressability to a segment within
+ * a buffer aggregate at a specified segment identifier. The returned seglen
+ * indicates the number of bytes addressable at the returned address.
+ */
+void *odp_buffer_segment_map(odp_buffer_t buf, odp_buffer_segment_t seg,
+			     size_t *seglen);
+
+/**
+ * Unmap a buffer segment
+ *
+ * @param[in] seg Buffer segment handle
+ *
+ * @note This routine is used to unmap a buffer segment previously
+ * mapped by odp_buffer_segment_map(). Following this call,
+ * applications MUST NOT attempt to reference the segment via any
+ * pointer returned from a previous odp_buffer_segment_map() call
+ * referring to it. It is intended to allow certain NUMA
+ * architectures to better manage the coherency of mapped segments.
+ * For non-NUMA architectures this routine will be a no-op. Note that
+ * implementations SHOULD implicitly unmap all buffer segments
+ * whenever a buffer is added to a queue as this indicates that the
+ * caller is relinquishing control of the buffer.
+ */
+void odp_buffer_segment_unmap(odp_buffer_segment_t seg);
+
+/**
+ * Get start address for a specified buffer offset
+ *
+ * @param[in] buf Buffer handle
+ * @param[in] offset Byte offset within the buffer to be addressed
+ * @param[out] seglen Returned number of bytes in this buffer
+ * segment available at returned address
+ *
+ * @return Offset start address or NULL
+ *
+ * @note This routine is used to obtain addressability to a segment
+ * within a buffer at a specified byte offset. Note that because the
+ * offset is independent of any implementation-defined physical
+ * segmentation, the returned seglen may be “short” and will range from
+ * 1 to whatever physical segment size is used by the underlying
+ * implementation.
+ */
+void *odp_buffer_offset_map(odp_buffer_t buf, size_t offset,
+			    size_t *seglen);
+
+/**
+ * Unmap a buffer segment by offset
+ *
+ * @param[in] buf Buffer handle
+ * @param[in] offset Buffer offset
+ *
+ * @note This routine is used to unmap a buffer segment previously
+ * mapped by odp_buffer_offset_map(). Following this call
+ * the application MUST NOT attempt to reference the segment via any
+ * pointer returned by a prior odp_buffer_offset_map() call relating
+ * to this offset. It is intended to allow certain NUMA architectures
+ * to better manage the coherency of mapped segments. For non-NUMA
+ * architectures this routine will be a no-op. Note that
+ * implementations SHOULD implicitly unmap all buffer segments
+ * whenever a buffer is added to a queue as this indicates that the
+ * caller is relinquishing control of the buffer.
+ */
+void odp_buffer_offset_unmap(odp_buffer_t buf, size_t offset);
+
+/**
+ * Split a buffer into two buffers at a specified split point
+ *
+ * @param[in] buf Handle of buffer to split
+ * @param[in] offset Byte offset within buf to split buffer
+ *
+ * @return Buffer handle of the created split buffer
+ *
+ * @note This routine splits a buffer into two buffers at the
+ * specified byte offset. The odp_buffer_t returned by the function
+ * is the handle of the new buffer created at the split point. If the
+ * original buffer was allocated from a buffer pool then the split is
+ * allocated from the same pool. If the original buffer was size
+ * bytes in length then upon return the original buffer is of size
+ * offset while the split buffer is of size (size-offset).
+ *
+ * @note Upon return from this function, the system metadata for both
+ * buffers has been updated appropriately by the call since system
+ * metadata maintenance is the responsibility of the ODP
+ * implementation. Any required updates to the user metadata are the
+ * responsibility of the caller.
+ */
+odp_buffer_t odp_buffer_split(odp_buffer_t buf, size_t offset);
+
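As an informative illustration of these semantics (HDR_LEN is a hypothetical application constant, not part of the API):

/* Informative sketch only: detach everything after a fixed-size
 * header into its own buffer. */
#define HDR_LEN 64

static int split_after_header(odp_buffer_t buf, odp_buffer_t *payload)
{
	odp_buffer_t tail;

	if (odp_buffer_size(buf) <= HDR_LEN)
		return -1;             /* nothing to split off */

	tail = odp_buffer_split(buf, HDR_LEN);
	if (tail == ODP_BUFFER_INVALID)
		return -1;             /* split could not be performed */

	/* buf now holds bytes [0, HDR_LEN); tail holds the rest */
	*payload = tail;
	return 0;
}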
+/**
+ * Join two buffers into a single buffer
+ *
+ * @param[in] buf1 Buffer handle of first buffer to join
+ * @param[in] buf2 Buffer handle of second buffer to join
+ *
+ * @return Buffer handle of the joined buffer
+ *
+ * @note This routine joins two buffers into a single buffer. Both
+ * buf1 and buf2 MUST be from the same buffer pool and the resulting
+ * joined buffer will be an element of that same pool. The
+ * application MUST NOT assume that either buf1 or buf2 survive the
+ * join or that the returned joined buffer is contiguous with or
+ * otherwise related to the input buffers. An implementation SHOULD
+ * free either or both input buffers if they are not reused as part of
+ * the construction of the returned joined buffer. If the join cannot
+ * be performed (e.g., if the two input buffers are not from the same
+ * buffer pool, insufficient space in the target buffer pool, etc.)
+ * then ODP_BUFFER_INVALID SHOULD be returned to indicate that the
+ * operation could not be performed, and an appropriate errno set. In
+ * that case the input buffers MUST NOT be freed as part of the failed
+ * join attempt and should be unchanged from their input values and
+ * content.
+ *
+ * @note The result of odp_buffer_join() is the logical concatenation
+ * of the two buffers using an implementation-defined buffer
+ * aggregation mechanism. The application data contents of the
+ * returned buffer is identical to that of the two joined input
+ * buffers; however, certain associated metadata (e.g., information
+ * about the buffer size) will likely differ.
+ *
+ * @note If user metadata is present in the buffer pool containing the
+ * input buffers, then the user metadata associated with the returned
+ * buffer MUST be copied by this routine from the source buf1.
+ */
+odp_buffer_t odp_buffer_join(odp_buffer_t buf1, odp_buffer_t buf2);
+
+/**
+ * Trim a buffer at a specified trim point
+ *
+ * @param[in] buf buffer handle of buffer to trim
+ * @param[in] offset byte offset within buf to trim
+ *
+ * @return Handle of the trimmed buffer or ODP_BUFFER_INVALID
+ * if the operation was not performed
+ *
+ * @note This routine discards bytes from the end of a buffer. It is
+ * logically equivalent to a split followed by a free of the split
+ * portion of the input buffer. The input offset must be less than or
+ * equal to the odp_buffer_size() of the input buffer. Upon
+ * successful return the odp_buffer_size() routine would now return
+ * offset as the size of the trimmed buffer. Note that the returned
+ * odp_buffer_t may not necessarily be the same as the input
+ * odp_buffer_t. The caller should use the returned value when
+ * referencing the trimmed buffer instead of the original in case they
+ * are different.
+ *
+ * @note If the input buf contains user metadata, then this data MUST
+ * be copied to the returned buffer if needed by the API
+ * implementation.
+ */
+odp_buffer_t odp_buffer_trim(odp_buffer_t buf, size_t offset);
+
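The failure semantics above suggest the following defensive pattern (informative sketch only):

/* Informative sketch only: join two buffers; on failure both inputs
 * remain owned by the caller, per the semantics described above. */
static odp_buffer_t join_checked(odp_buffer_t buf1, odp_buffer_t buf2)
{
	odp_buffer_t joined = odp_buffer_join(buf1, buf2);

	if (joined == ODP_BUFFER_INVALID) {
		/* buf1 and buf2 are unchanged and still valid */
		return ODP_BUFFER_INVALID;
	}

	/* buf1 and buf2 must no longer be referenced here; use the
	 * returned handle exclusively from this point on. */
	return joined;
}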
+/**
+ * Extend a buffer for a specified number of bytes
+ *
+ * @param[in] buf buffer handle of buffer to expand
+ * @param[in] ext size, in bytes, of the extent to add to the
+ * existing buffer.
+ *
+ * @return Handle of the extended buffer or ODP_BUFFER_INVALID
+ * if the operation was not performed
+ *
+ * @note This routine extends a buffer by increasing its size by ext
+ * bytes. It is logically equivalent to an odp_buffer_join() of a
+ * buffer of size ext to the original buffer. Upon successful return
+ * the odp_buffer_size() routine would now return size+ext as the size
+ * of the extended buffer.
+ *
+ * @note Note that the returned odp_buffer_t may not necessarily be the
+ * same as the input odp_buffer_t. The caller should use the returned
+ * value when referencing the extended buffer instead of the original
+ * in case they are different. If the input buf contains user
+ * metadata, then this data MUST be copied to the returned buffer if
+ * needed by the API implementation.
+ */
+odp_buffer_t odp_buffer_extend(odp_buffer_t buf, size_t ext);
+
+/**
+ * Clone a buffer, returning an exact copy of it
+ *
+ * @param[in] buf buffer handle of buffer to duplicate
+ *
+ * @return Handle of the duplicated buffer or ODP_BUFFER_INVALID
+ * if the operation was not performed
+ *
+ * @note This routine allows an ODP buffer to be cloned in an
+ * implementation-defined manner. The application data contents of
+ * the returned odp_buffer_t is an exact copy of the application data
+ * of the input buffer. The implementation MAY perform this operation
+ * via reference counts, resegmentation, or any other technique it
+ * wishes to employ. The cloned buffer is an element of the same
+ * buffer pool as the input buf. If the input buf contains user
+ * metadata, then this data MUST be copied to the returned buffer by
+ * the ODP implementation.
+ */
+odp_buffer_t odp_buffer_clone(odp_buffer_t buf);
+
+/**
+ * Copy a buffer, returning an exact copy of it
+ *
+ * @param[in] buf buffer handle of buffer to copy
+ * @param[in] pool buffer pool to contain the copied buffer
+ *
+ * @return Handle of the copied buffer or ODP_BUFFER_INVALID
+ * if the operation was not performed
+ *
+ * @note This routine allows an ODP buffer to be copied in an
+ * implementation-defined manner to a specified buffer pool. The
+ * specified pool may or may not be different from the source buffer’s
+ * pool. The application data contents of the returned odp_buffer_t
+ * is an exact separate copy of the application data of the input
+ * buffer. If the input buf contains user metadata, then this data
+ * MUST be copied to the returned buffer by the ODP implementation.
+ */
+odp_buffer_t odp_buffer_copy(odp_buffer_t buf, odp_buffer_pool_t pool);
+
 /**
  * @}
  */

diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h b/platform/linux-generic/include/api/odp_buffer_pool.h
index d04abf0..da08cc7 100644
--- a/platform/linux-generic/include/api/odp_buffer_pool.h
+++ b/platform/linux-generic/include/api/odp_buffer_pool.h
@@ -8,7 +8,43 @@
 /**
  * @file
  *
- * ODP buffer pool
+ * @par Buffer Pools
+ * Buffers are elements of buffer pools that represent an equivalence
+ * class of buffer objects that are managed by a buffer pool manager.
+ * ODP implementations MAY support buffer pool managers implemented in
+ * hardware, software, or a combination of the two. An ODP
+ * implementation MUST support at least one buffer pool and MAY
+ * support as many as it wishes. The implementation MAY support one
+ * or more predefined buffer pools that are not explicitly allocated
+ * by an ODP application. It SHOULD also support application creation
+ * of buffer pools via the odp_buffer_pool_create() API, however it
+ * MAY restrict the types of buffers that can be so created.
+ *
+ * @par
+ * Buffer pools are represented by the abstract type odp_buffer_pool_t
+ * that is returned by buffer pool creation and lookup/enumeration
+ * routines. Applications refer to buffer pools via a name of
+ * implementation-defined maximum length that MUST be a minimum of
+ * eight characters in length and MAY be longer. It is RECOMMENDED
+ * that 32 character buffer pool names be supported to provide
+ * application naming flexibility. The supported maximum length of
+ * buffer pool names is exposed via the ODP_BUFFER_POOL_NAME_LEN
+ * predefined implementation limit.
+ *
+ * @par Segmented vs. Unsegmented Buffer Pools
+ * By default, the buffers in
+ * ODP buffer pools are logical buffers that support transparent
+ * segmentation managed by ODP on behalf of the application and have a
+ * rich set of associated semantics as described here.
+ * ODP_BUFFER_OPTS_UNSEGMENTED indicates that the buf_size specified
+ * for the pool should be regarded as a fixed buffer size for all pool
+ * elements and that segmentation support is not needed for the pool.
+ * This MAY result in greater efficiency on some implementations. For
+ * packet processing, a typical use of unsegmented pools would be in
+ * conjunction with classification rules that sort packets into
+ * different pools based on their lengths, thus ensuring that each
+ * packet occupies a single segment within an appropriately-sized
+ * buffer.
  */

 #ifndef ODP_BUFFER_POOL_H_
@@ -18,9 +54,8 @@ extern "C" {
 #endif

-
-
 #include
+#include
 #include

 /** @addtogroup odp_buffer
@@ -31,78 +66,368 @@
 /** Maximum queue name lenght in chars */
 #define ODP_BUFFER_POOL_NAME_LEN 32

-/** Invalid buffer pool */
-#define ODP_BUFFER_POOL_INVALID 0
+/**
+ * Buffer initialization routine prototype
+ *
+ * @note Routines of this type MAY be passed as part of the
+ * odp_buffer_pool_init_t structure to be called whenever a
+ * buffer is allocated to initialize the user metadata
+ * associated with that buffer.
+ */
+typedef void (odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg);

-/** ODP buffer pool */
-typedef uint32_t odp_buffer_pool_t;
+/**
+ * Buffer pool parameters
+ *
+ * @param[in] buf_num Number of buffers that pool should contain
+ * @param[in] buf_size Size of application data in each buffer
+ * @param[in] buf_type Buffer type
+ * @param[in] buf_opts Buffer options
+ */
+typedef struct odp_buffer_pool_param_t {
+	size_t buf_num;             /**< Number of buffers in this pool */
+	size_t buf_size;            /**< Application data size of each buffer */
+	odp_buffer_type_e buf_type; /**< Buffer type */
+	odp_buffer_opts_e buf_opts; /**< Buffer options */
+} odp_buffer_pool_param_t;       /**< Type of buffer pool parameter struct */

+/**
+ * Buffer pool initialization parameters
+ *
+ * @param[in] udata_size Size of the user metadata for each buffer
+ * @param[in] buf_init Function pointer to be called to initialize the
+ * user metadata for each buffer in the pool.
+ * @param[in] buf_init_arg Argument to be passed to buf_init().
+ *
+ */
+typedef struct odp_buffer_pool_init_t {
+	size_t udata_size;        /**< Size of user metadata for each buffer */
+	odp_buf_init_t *buf_init; /**< Buffer initialization routine to use */
+	void *buf_init_arg;       /**< Argument to be passed to buf_init() */
+} odp_buffer_pool_init_t;       /**< Type of buffer initialization struct */

 /**
  * Create a buffer pool
  *
- * @param name      Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 chars)
- * @param base_addr Pool base address
- * @param size      Pool size in bytes
- * @param buf_size  Buffer size in bytes
- * @param buf_align Minimum buffer alignment
- * @param buf_type  Buffer type
+ * @param[in] name Name of the pool
+ * (max ODP_BUFFER_POOL_NAME_LEN - 1 chars)
+ *
+ * @param[in] params Parameters controlling the creation of this
+ * buffer pool
+ *
+ * @param[in] init_params Parameters controlling the initialization of
+ * this buffer pool
+ *
+ * @return Buffer pool handle or ODP_BUFFER_POOL_NULL with errno set
  *
- * @return Buffer pool handle
+ * @note This routine is used to create a buffer pool. It takes three
+ * arguments: the name of the pool to be created, a parameter
+ * structure that controls the pool creation, and an optional
+ * parameter that controls pool initialization. In the creation
+ * parameter structure, the application specifies the number of
+ * buffers that the pool should contain as well as the application
+ * data size for each buffer in the pool, the type of buffers it
+ * should contain, and their associated options. In the
+ * initialization parameters, the application specifies the size of
+ * the user metadata that should be associated with each buffer in
+ * the pool. If no user metadata is required, the init_params SHOULD
+ * be specified as NULL. If user metadata is requested, then
+ * udata_size SHOULD be set to the requested size of the per-buffer
+ * user metadata. Also specified is the address of an
+ * application-provided buffer initialization routine to be called for
+ * each buffer in the pool at the time the pool is initialized, or
+ * when the buffer is allocated. If no application buffer
+ * initialization is needed, then buf_init and buf_init_arg SHOULD be
+ * set to NULL.
  */
 odp_buffer_pool_t odp_buffer_pool_create(const char *name,
-					 void *base_addr, uint64_t size,
-					 size_t buf_size, size_t buf_align,
-					 int buf_type);
+					 odp_buffer_pool_param_t *params,
+					 odp_buffer_pool_init_t *init_params);
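Putting the two parameter structures together, pool creation might look like the following informative sketch (the pool name and sizes are illustrative, and the metadata initializer is optional; assumes <string.h> for memset):

/* Informative sketch only: create a packet pool whose buffers carry
 * a 16-byte user metadata area, cleared at allocation time. */
static void my_udata_init(odp_buffer_t buf, void *buf_init_arg)
{
	(void)buf_init_arg;
	memset(odp_buffer_udata_addr(buf), 0, 16);
}

static odp_buffer_pool_t my_pool_create(void)
{
	odp_buffer_pool_param_t params;
	odp_buffer_pool_init_t init_params;

	params.buf_num  = 1024;
	params.buf_size = 2048;          /* application data per buffer */
	params.buf_type = ODP_BUFFER_TYPE_PACKET;
	params.buf_opts = ODP_BUFFER_OPTS_NONE;

	init_params.udata_size   = 16;
	init_params.buf_init     = my_udata_init;
	init_params.buf_init_arg = NULL;

	return odp_buffer_pool_create("my_pkt_pool", &params, &init_params);
}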
+/**
+ * Destroy a buffer pool previously created by odp_buffer_pool_create()
+ *
+ * @param[in] pool Handle of the buffer pool to be destroyed
+ *
+ * @return 0 on Success, -1 on Failure.
+ *
+ * @note This routine destroys a previously created buffer pool.
+ * Attempts to destroy a predefined buffer pool will be rejected
+ * since the application did not create it. Results are undefined if
+ * an attempt is made to destroy a buffer pool that contains allocated
+ * or otherwise active buffers.
+ */
+int odp_buffer_pool_destroy(odp_buffer_pool_t pool);

 /**
  * Find a buffer pool by name
  *
- * @param name      Name of the pool
+ * @param[in] name Name of the pool
  *
  * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found.
  */
 odp_buffer_pool_t odp_buffer_pool_lookup(const char *name);

+/**
+ * Get the next buffer pool from its predecessor
+ *
+ * @param[in] pool Buffer pool handle
+ * @param[out] name Name of the pool
+ * (max ODP_BUFFER_POOL_NAME_LEN - 1 chars)
+ * @param[out] udata_size Size of user metadata used by this pool.
+ * @param[out] params Output structure for pool parameters
+ * @param[out] predef Predefined (1) or Created (0).
+ *
+ * @return Buffer pool handle
+ *
+ * @note This routine returns the abstract identifier
+ * (odp_buffer_pool_t) of a buffer pool and is used to obtain the list
+ * of all buffer pools. In this manner an application can discover
+ * both application created and implementation predefined buffer pools
+ * and their characteristics. The input specifies the previous buffer
+ * pool identifier. There are three use cases for this
+ * routine:
+ *
+ * -# If the input pool is ODP_BUFFER_POOL_START then the buffer pool handle
+ * returned is that of the first buffer pool in the list.
+ * ODP_BUFFER_POOL_NULL MAY be used as a synonym for ODP_BUFFER_POOL_START
+ * if desired.
+ *
+ * -# If the input pool is not the last element in the buffer pool list
+ * then the buffer pool handle of the next buffer pool following pool is
+ * returned.
+ *
+ * -# If the input pool is the buffer pool handle of the last buffer pool
+ * in the list then ODP_BUFFER_POOL_NULL is returned.
+ *
+ * @note Returned with the buffer pool handle is the name of the pool as
+ * well as its dimensions, type of buffers it contains, and a flag
+ * that says whether the pool is predefined or was created by the
+ * application. Note that the buf_size reported for a buffer pool is
+ * simply the declared expected size of the buffers in the pool and
+ * serves only to estimate the total amount of application data that
+ * can be stored in the pool. Actual sizes of individual buffers
+ * within the pool are dynamic and variable since physical buffer
+ * segments MAY be aggregated to create buffers of arbitrary size (up
+ * to the pool memory limits). Note that for predefined buffer pools,
+ * some implementations MAY return the physical segment counts and
+ * sizes used to construct the pool as output of this routine.
+ */
+odp_buffer_pool_t odp_buffer_pool_next(odp_buffer_pool_t pool,
+				       char *name, size_t *udata_size,
+				       odp_buffer_pool_param_t *params,
+				       int *predef);

 /**
- * Print buffer pool info
+ * Get the high/low watermarks for a buffer pool
+ *
+ * @param[in] pool Handle of the buffer pool
+ * @param[out] high_wm The high water mark of the designated buffer pool
+ * @param[out] low_wm The low water mark of the designated buffer pool
  *
- * @param pool      Pool handle
+ * @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
+ * or ODP_BUFFER_POOL_NO_WATERMARKS if no watermarks
+ * are associated with this buffer pool.
  *
+ * @note This routine gets the high/low watermarks associated with a
+ * given buffer pool. If the buffer pool does not have or support
+ * watermarks then an error will be returned and both high_wm and
+ * low_wm will be unchanged.
+ *
+ * @note It is RECOMMENDED that buffer pools of all types support the
+ * setting and getting of watermarks for use in flow control
+ * processing. Watermarks are designed to trigger flow control
+ * actions based on utilization levels of a buffer pool. When the
+ * number of free buffers in the buffer pool hits the configured low
+ * watermark for the pool, the pool asserts a low watermark condition
+ * and an implementation-defined action in response to this condition
+ * is triggered. Once in a low watermark state, the condition is
+ * maintained until the number of free buffers reaches the configured
+ * high watermark. At this point the low watermark condition is
+ * deasserted and normal pool processing resumes.
+ * Having separate high and low watermarks permits configurable
+ * hysteresis to avoid jitter in handling transient buffer shortages
+ * in the pool.
+ *
+ * @note In general, two types of actions are common. The first is to
+ * control Random Early Detection (RED) or Weighted RED (WRED)
+ * processing for the pool, while the second is to control IEEE
+ * 802.1Qbb priority-based flow control (PFC) processing for so-called
+ * “lossless Ethernet” support. The use of watermarks for flow control
+ * processing is most often used for pools containing packets and this
+ * is discussed in further detail in the Class of Service (CoS) ODP
+ * Classification APIs.
  */
-void odp_buffer_pool_print(odp_buffer_pool_t pool);
+int odp_buffer_pool_watermarks(odp_buffer_pool_t pool,
+			       size_t *high_wm, size_t *low_wm);

+/**
+ * Set the high/low watermarks for a buffer pool
+ *
+ * @param[in] pool Handle of the buffer pool
+ * @param[in] high_wm The high water mark of the designated buffer pool
+ * @param[in] low_wm The low water mark of the designated buffer pool
+ *
+ * @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
+ * or ODP_BUFFER_POOL_NO_WATERMARKS if no watermarks
+ * are associated with this buffer pool.
+ *
+ * @note This routine sets the high/low watermarks associated with a
+ * specified buffer pool. If the buffer pool does not support
+ * watermarks then errno ODP_BUFFER_POOL_NO_WATERMARKS is set and no
+ * function is performed.
+ */
+int odp_buffer_pool_set_watermarks(odp_buffer_pool_t pool,
+				   size_t high_wm, size_t low_wm);

 /**
- * Buffer alloc
+ * Get the headroom for a packet buffer pool
  *
- * The validity of a buffer can be cheked at any time with odp_buffer_is_valid()
- * @param pool      Pool handle
+ * @param[in] pool Handle of the buffer pool
  *
- * @return Buffer handle or ODP_BUFFER_INVALID
+ * @return The headroom for the pool. If the pool is invalid,
+ * returns -1 and errno set to ODP_BUFFER_POOL_INVALID.
+ *
+ * @note This routine returns the headroom associated with the buffer
+ * pool. This is the headroom that will be set for packets allocated
+ * from this packet buffer pool.
  */
-odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool);
+size_t odp_buffer_pool_headroom(odp_buffer_pool_t pool);

+/**
+ * Set the headroom for a packet buffer pool
+ *
+ * @param[in] pool Handle of the buffer pool
+ * @param[in] hr The headroom for the pool
+ *
+ * @return 0 on Success or -1 on error. For errors, errno set to
+ * ODP_BUFFER_POOL_INVALID if pool is unknown
+ * or ODP_INVALID_RANGE if hr exceeds
+ * ODP_PACKET_MAX_HEADROOM
+ *
+ * @note This routine sets the default headroom associated with
+ * buffers allocated from this packet pool. Note that headroom is a
+ * per-packet attribute. The headroom associated with the buffer pool
+ * is the default headroom to assign to a packet allocated from this
+ * buffer pool by the odp_packet_alloc() routine. By contrast, the
+ * odp_cos_set_headroom() classification API sets the default headroom
+ * to assign to a packet by the classifier for packets matching a
+ * particular Class of Service (CoS). The allowable range of
+ * supported headroom sizes is subject to the ODP_PACKET_MAX_HEADROOM
+ * limit defined by the implementation. The valid range for hr is
+ * 0..ODP_PACKET_MAX_HEADROOM.
+ *
+ * @note Headroom serves two purposes. The first is to reserve a prefix area
+ * of buffers that will hold packets for header expansion.
+ * Applications can add headers to packets via the
+ * odp_packet_push_headroom() routine to make headroom space available
+ * for new headers.
+ *
+ * @note The second use of headroom is to control packet alignment
+ * within buffers. The buffers in a buffer pool MUST be "naturally
+ * aligned" for addressing purposes by the implementation. It is
+ * RECOMMENDED that this be cache aligned. Because a standard
+ * Ethernet header is 14 octets in length, it is usually convenient to
+ * offset packets by 2 octets so that the following Layer 3 header
+ * (typically IPv4 or IPv6) is naturally aligned on a word boundary.
+ * So applications SHOULD specify an offset that reflects the packet
+ * alignment they wish to see. For example, a call like
+ * odp_buffer_pool_set_headroom(pool, hr+2); would force packets to be
+ * offset by two bytes to achieve the desired Layer 3 alignment while
+ * also reserving hr bytes of headroom for application use.
+ *
+ * @note Implementations are free to make more headroom than is
+ * requested available to the packet, providing that in doing so they
+ * preserve the implied alignment in the caller's specified headroom
+ * value. Doing so requires that implementations MUST extend the
+ * requested headroom only by integral multiples of 8 bytes. So in
+ * response to a request for hr bytes of headroom, implementations MAY
+ * increase this to (n*8)+hr bytes.
+ *
+ * @note Note also that if the buffer pool is unsegmented, the
+ * specified headroom will subtract from the preallocated segments
+ * that comprise the pool. Applications SHOULD take this into account
+ * when sizing unsegmented buffer pools.
+ *
+ * @note Specifying a new headroom for an existing buffer pool does
+ * not affect the headroom associated with existing packets previously
+ * allocated from it. The buffer pool headroom setting only affects
+ * new packets allocated from the pool.
+ */
+int odp_buffer_pool_set_headroom(odp_buffer_pool_t pool, size_t hr);

 /**
- * Buffer free
+ * Get the tailroom for a packet buffer pool
  *
- * @param buf       Buffer handle
+ * @param[in] pool Handle of the buffer pool
  *
+ * @return The tailroom for the pool. If the pool is invalid,
+ * returns -1 and errno set to ODP_BUFFER_POOL_INVALID.
+ *
+ * @note This routine returns the tailroom associated with buffers
+ * allocated from a packet buffer pool.
  */
-void odp_buffer_free(odp_buffer_t buf);
+size_t odp_buffer_pool_tailroom(odp_buffer_pool_t pool);
+
+/**
+ * Set the tailroom for a packet buffer pool
+ *
+ * @param[in] pool Handle of the buffer pool
+ * @param[in] tr The tailroom for the pool
+ *
+ * @return 0 on Success or -1 on error. For errors, errno set to
+ * ODP_BUFFER_POOL_INVALID if pool is unknown
+ * or ODP_INVALID_RANGE if tr exceeds
+ * ODP_PACKET_MAX_TAILROOM
+ *
+ * @note This routine sets the tailroom associated with buffers
+ * allocated from a packet pool. The allowable range of supported
+ * tailroom sizes is subject to the ODP_PACKET_MAX_TAILROOM limit
+ * defined by the implementation. The valid range for tr is
+ * 0..ODP_PACKET_MAX_TAILROOM.
+ *
+ * @note Implementations are free to make more tailroom than is
+ * requested available to the packet. This call simply specifies the
+ * minimum tailroom that the application needs.
+ *
+ * @note Note also that if the buffer pool is unsegmented, the specified
+ * tailroom will subtract from the preallocated segments that comprise
+ * the pool. Applications SHOULD take this into account when sizing
+ * unsegmented buffer pools.
+ *
+ * @note Specifying a new tailroom for an existing buffer pool does
+ * not affect the tailroom associated with packets previously
+ * allocated from it. The buffer pool tailroom setting only affects
+ * new packets allocated from the pool.
+ */
+int odp_buffer_pool_set_tailroom(odp_buffer_pool_t pool, size_t tr);
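Taken together, the watermark, headroom, and tailroom setters might be exercised as in this informative sketch (the numeric values are illustrative only, not recommendations):

/* Informative sketch only: tune a packet pool for encapsulation work
 * plus simple flow control. */
static int my_pool_tune(odp_buffer_pool_t pool)
{
	/* 128 bytes for prospective tunnel headers, +2 so that the
	 * Layer 3 header lands on a word boundary */
	if (odp_buffer_pool_set_headroom(pool, 128 + 2) < 0)
		return -1;

	/* Modest tailroom for trailer/checksum insertion */
	if (odp_buffer_pool_set_tailroom(pool, 16) < 0)
		return -1;

	/* Assert low-watermark handling below 256 free buffers and
	 * deassert it again at 512 */
	return odp_buffer_pool_set_watermarks(pool, 512, 256);
}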
+
+/**
+ * Print buffer pool info
+ *
+ * @param[in] pool Pool handle
+ *
+ * @note This is a diagnostic routine that prints statistics regarding
+ * the specified buffer pool to the ODP LOG. Its output is
+ * implementation-defined.
+ */
+void odp_buffer_pool_print(odp_buffer_pool_t pool);

 /**
- * Buffer pool of the buffer
+ * Buffer alloc
  *
- * @param buf       Buffer handle
+ * The validity of a buffer can be checked at any time with
+ * odp_buffer_is_valid()
+ * @param[in] pool Pool handle
+ *
+ * @return Buffer handle or ODP_BUFFER_INVALID
+ */
+odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool);
+
+/**
+ * Buffer free
+ *
+ * @param[in] buf Buffer handle
  *
- * @return Buffer pool the buffer was allocated from
  */
-odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf);
+void odp_buffer_free(odp_buffer_t buf);

 /**
  * @}
diff --git a/platform/linux-generic/include/api/odp_config.h b/platform/linux-generic/include/api/odp_config.h
index 906897c..65cc5b5 100644
--- a/platform/linux-generic/include/api/odp_config.h
+++ b/platform/linux-generic/include/api/odp_config.h
@@ -49,6 +49,12 @@ extern "C" {
 #define ODP_CONFIG_PKTIO_ENTRIES 64

 /**
+ * Packet processing limits
+ */
+#define ODP_CONFIG_BUF_SEG_SIZE (512*3)
+#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7)
+
+/**
  * @}
  */

diff --git a/platform/linux-generic/include/api/odp_packet.h b/platform/linux-generic/include/api/odp_packet.h
index 688e047..2d031ea 100644
--- a/platform/linux-generic/include/api/odp_packet.h
+++ b/platform/linux-generic/include/api/odp_packet.h
@@ -8,7 +8,262 @@
 /**
  * @file
  *
- * ODP packet descriptor
+ * @par ODP Packet Management APIs
+ * Described here are the fundamental
+ * concepts and supporting APIs of the ODP Packet Management routines.
+ * All conforming ODP implementations MUST provide these data types
+ * and APIs. Latitude in how routines MAY be implemented is noted
+ * when applicable.
+ *
+ * @par Inherited and New Concepts
+ * As a type of buffer, a packet is
+ * allocated from its containing buffer pool, created via
+ * odp_buffer_pool_create() with a buffer type of
+ * ODP_BUFFER_TYPE_PACKET. Packets are referenced by an abstract
+ * odp_packet_t handle defined by each implementation.
+ *
+ * @par
+ * Packet objects are normally created at ingress when they arrive
+ * at a source odp_pktio_t and are received by an application either
+ * directly or (more typically) via a scheduled receive queue. They
+ * MAY be implicitly freed when they are transmitted to an output
+ * odp_pktio_t via an associated transmit queue, or freed directly via
+ * the odp_packet_free() API.
+ *
+ * @par
+ * Packets contain additional system metadata beyond those found
+ * in buffers that is populated by the parse function of the ODP
+ * classifier. See below for a discussion of this metadata and the
+ * accessor functions provided for application reference to them.
+ *
+ * @par
+ * Occasionally an application may originate a packet itself,
+ * either de novo or by deriving it from an existing packet, and APIs
+ * are provided to assist in these cases as well.
+ * Application-created
+ * packets can be recycled back through a loopback interface to reparse
+ * and reclassify them, or the application can explicitly re-invoke the
+ * parser or do its own parsing as desired. This can also occur as a
+ * result of packet decryption or decapsulation when dealing with
+ * ingress tunnels. See the ODP classification design document for
+ * further details. Additionally, the metadata set as a result of
+ * parsing MAY be directly set by the application as needed.
+ *
+ * @par Packet Structure and Concepts
+ * A packet consists of a sequence
+ * of octets conforming to an architected format, such as Ethernet,
+ * that can be received and transmitted via the ODP pktio abstraction.
+ * Packets have a length, which is the number of bytes in the packet.
+ * Packet data in ODP is referenced via offsets since these reflect
+ * the logical contents and structure of a packet independent of how
+ * particular ODP implementations store that data.
+ *
+ * @par
+ * These concepts are shown in the following diagram:
+ *
+ * @image html packet.png "ODP Packet Structure" width=\textwidth
+ * @image latex packet.eps "ODP Packet Structure" width=\textwidth
+ *
+ * @par
+ * Packet data consists of zero or more headers, followed by 0 or
+ * more bytes of payload, followed by zero or more trailers.
+ *
+ * @par Packet Segments and Addressing
+ * Network SoCs use various
+ * methods and techniques to store and process packets efficiently.
+ * These vary considerably from platform to platform, so to ensure
+ * portability across them ODP adopts certain conventions for
+ * referencing packets.
+ *
+ * @par
+ * ODP APIs use a handle of type odp_packet_t to refer to packet
+ * objects. Associated with packets are various bits of system
+ * metadata that describe the packet. By referring to the metadata,
+ * ODP applications accelerate packet processing by minimizing the
+ * need to examine packet data. This is because the packet metadata is
+ * populated by parsing and classification functions that are coupled
+ * to ingress processing that occurs prior to a packet being presented
+ * to the application via the ODP scheduler.
+ *
+ * @par
+ * When an ODP application needs to examine the contents of a
+ * packet, it requests addressability to it via a mapping API that
+ * makes the packet (or a contiguously addressable segment of it)
+ * available for coherent access by the application. While ODP
+ * applications MAY request that packets be stored in unsegmented
+ * buffer pools, not all platforms supporting ODP are able to provide
+ * contiguity guarantees for packets and as a result such requests may
+ * either fail or else result in degraded performance compared to
+ * native operation.
+ *
+ * @par
+ * Instead, ODP applications SHOULD assume that the underlying
+ * implementation stores packets in segments of implementation-defined
+ * and managed size. These represent the contiguously addressable
+ * portions of a packet that the application may refer to via normal
+ * memory accesses. ODP provides APIs that allow applications to
+ * operate on packet segments in an efficient and portable manner as
+ * needed. By combining these with the metadata provided for
+ * packets, ODP applications can operate in a fully
+ * platform-independent manner while still achieving optimal
+ * performance across the range of platforms that support ODP.
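As an informative sketch of this addressing model (using the buffer-level odp_buffer_offset_map()/odp_buffer_offset_unmap() calls declared earlier, reached through odp_packet_to_buffer(), which is declared below), a byte-wise walk of a packet might look like this:

/* Informative sketch only: checksum 'len' bytes of a packet, one
 * contiguously addressable piece at a time. Each map call returns a
 * seglen bounding what may be referenced from the returned address. */
static uint32_t pkt_byte_sum(odp_packet_t pkt, size_t len)
{
	odp_buffer_t buf = odp_packet_to_buffer(pkt);
	uint32_t sum = 0;
	size_t offset = 0;

	while (offset < len) {
		size_t seglen, i;
		uint8_t *p = odp_buffer_offset_map(buf, offset, &seglen);

		if (p == NULL)
			break;         /* offset out of range */

		for (i = 0; i < seglen && offset + i < len; i++)
			sum += p[i];

		odp_buffer_offset_unmap(buf, offset);
		offset += seglen;
	}

	return sum;
}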
+ *
+ * @par
+ * The use of segments for packet addressing and their
+ * relationship to metadata is shown in this diagram:
+ *
+ * @image html segments.png "ODP Packet Segmentation Structure" width=\textwidth
+ * @image latex segments.eps "ODP Packet Segmentation Structure" width=\textwidth
+ *
+ * @par
+ * The packet metadata is set during parsing and identifies the
+ * starting offsets of the various headers contained in the packet.
+ * The packet itself is physically stored as a sequence of segments
+ * that are managed by the ODP implementation. Segment 0 is the first
+ * segment of the packet and is where the packet’s headroom and
+ * headers typically reside. Depending on the length of the packet,
+ * additional segments may be part of the packet and contain the
+ * remaining packet payload and tailroom. The application need not
+ * concern itself with segments except that when the application
+ * requires addressability to a packet it understands that
+ * addressability is provided on a per-segment basis. So, for
+ * example, if the application makes a call like
+ * odp_packet_payload_map() to obtain addressability to the packet
+ * payload, the returned seglen from that call is the number of bytes
+ * from the start of the payload that are contiguously addressable to
+ * the application from the returned payload address. This is because
+ * the following byte occupies a different segment that may be stored
+ * elsewhere. To obtain access to those bytes, the application simply
+ * requests addressability to that offset and it will be able to
+ * address the payload bytes that occupy segment 1, etc. Note that
+ * the returned seglen for any mapping call is always the lesser of
+ * the remaining packet length and the size of its containing segment.
+ * So a mapping request for segment 2, for example, would return a
+ * seglen that extends only to the end of the packet since the
+ * remaining bytes are part of the tailroom reserved for the packet
+ * and are not usable by the application until made available to it by
+ * an appropriate API call.
+ *
+ * @par Headroom and Tailroom
+ * Because data plane applications will
+ * often manipulate packets by adding or removing headers and/or
+ * trailers, ODP implementations MUST support the concepts of headroom
+ * and tailroom for packets. How implementations choose to support
+ * these concepts is unspecified by ODP.
+ *
+ * @par
+ * Headroom is an area that logically prepends the start of a
+ * packet and is reserved for the insertion of additional header
+ * information to the front of a packet. Typical use of headroom
+ * might be packet encapsulation as part of tunnel operations.
+ * Tailroom is a similar area that logically follows a packet reserved
+ * for the insertion of trailer information at the end of a packet.
+ * Typical use of tailroom might be in payload manipulation or in
+ * additional checksum insertion. The idea behind headroom and
+ * tailroom is to support efficient manipulation of packet headers
+ * and/or trailers by preallocating buffer space and/or metadata to
+ * support the insertion of packet headers and/or trailers while
+ * avoiding the overhead of more general split/join buffer operations.
+ *
+ * @par
+ * Note that not every application or communication protocol will
+ * need these and ODP implementations MAY impose restrictions or
+ * modifications on when and how these capabilities are used.
+ * For example, headroom MAY indicate the byte offset into a packet buffer
+ * at which packet data is received from an associated odp_pktio_t.
+ * An implementation MAY add to the requested headroom or tailroom for
+ * implementation-defined alignment or other reasons. Note also that
+ * implementations MUST NOT assume that headroom and/or tailroom is
+ * necessarily contiguous with any other segment of the packet unless
+ * the underlying buffer pool the packet has been allocated from has
+ * been explicitly defined as unsegmented. See the ODP Buffer API
+ * design for discussion of segmented vs. unsegmented buffers and
+ * their implementation models. This convention is observed
+ * automatically because every mapping call returns a corresponding
+ * seglen that tells the application the number of bytes it may
+ * reference from the address returned by that call. Applications
+ * MUST observe these limits to avoid programming errors and
+ * portability issues.
+ *
+ * @par Packet Parsing and Inflags
+ * ODP packets are intended to be
+ * processed by the ODP Classifier upon receipt. As part of its
+ * processing, the classifier parses information from the packet
+ * headers and makes this information available as system metadata so
+ * that applications using ODP do not have to reference packets or
+ * their headers directly for most processing. The set of headers
+ * supported by the ODP parse functions MUST include at minimum the
+ * following:
+ *
+ * - Layer 2: ARP, SNAP (recognition), VLAN (C-Tag and S-Tag)
+ * - Layer 3: IPv4, IPv6
+ * - Layer 4: TCP, UDP, ICMP, ICMPv6, IPsec (ESP and AH)
+ *
+ * @par
+ * Other protocols MAY be supported, however ODP v1.0 does not
+ * define APIs for referencing them.
+ *
+ * @par
+ * Parsing results are stored as metadata associated with the
+ * packet. These include various precomputed offsets used for direct
+ * access to parsed headers as well as indicators of packet contents
+ * that are collectively referred to as inflags. Inflags are packet
+ * metadata that may be inspected or set via accessor functions as
+ * described below. Setters are provided to enable applications that
+ * create or modify packet headers to update these attributes
+ * efficiently. Applications that use them take responsibility for
+ * ensuring that the results are consistent. ODP itself does not
+ * validate an inflag setter to ensure that it reflects actual packet
+ * contents. Applications that want this additional assurance should
+ * request an explicit packet reparse.
+ *
+ * @par Packet Outflags
+ * Packet transmission options are controlled by
+ * packet metadata collectively referred to as outflags. An
+ * application sets these to request various services related to
+ * packet transmission.
+ *
+ * @par
+ * Note: The outflags controlling checksum offload processing are
+ * overrides. That is, they have no effect unless they are set
+ * explicitly by the application. By default, checksum offloads are
+ * controlled by the corresponding settings of the odp_pktio_t through
+ * which a packet is transmitted. The purpose of these bits is to
+ * permit this offload processing to be overridden on a per-packet
+ * basis. Note that not every implementation may support such
+ * override capabilities, which is why the setters here return a
+ * success/failure indicator.
+ *
+ * @par Packet Headroom and Tailroom Routines
+ * Data plane applications frequently manipulate the headers and trailers
+ * associated with packets.
These operations involve either stripping + * headers or trailers from packets or inserting new headers or + * trailers onto them. To enable this manipulation, ODP provides the + * notions of headroom and tailroom, as well as a set of APIs for + * manipulating them efficiently. + * + * @par + * Headroom is a set of bytes that logically precede the start of + * a packet, enabling additional headers to be created that become + * part of the packet. Similarly, tailroom is a set of bytes that + * logically follow the end of a packet, enabling additional payload + * and/or trailers to be created that become part of the packet. Both + * headroom and tailroom are metadata associated with packets, and + * are assigned at packet creation. + * + * @par + * Packet headroom and tailroom are manipulated by the following + * routines, which MUST be provided by conforming ODP implementations. + * These routines define push and pull operations. The convention + * is that push operations move away from packet data while pull + * operations move towards packet data. Alternatively, push operations + * add to packet data, while pull operations remove packet data. + * + * @par + * These concepts are shown as operations on the packet diagram + * we saw previously: + * + * @image html hrtr.png "Headroom and Tailroom Manipulation" width=\textwidth + * @image latex hrtr.eps "Headroom and Tailroom Manipulation" width=\textwidth */ #ifndef ODP_PACKET_H_ @@ -21,430 +276,1978 @@ extern "C" { #include /** @defgroup odp_packet ODP PACKET - * Operations on a packet. + * * @{ */ /** - * ODP packet descriptor + * Convert a buffer handle to a packet handle + * + * @param[in] buf Buffer handle + * + * @return Packet handle + * + * @note This routine converts a buffer handle to a packet handle. + * Only meaningful if buffer is of type ODP_BUFFER_TYPE_PACKET. + * Results are undefined otherwise. */ -typedef odp_buffer_t odp_packet_t; +odp_packet_t odp_packet_from_buffer(odp_buffer_t buf); -/** Invalid packet */ -#define ODP_PACKET_INVALID ODP_BUFFER_INVALID +/** + * Convert a packet handle to a buffer handle + * + * @param[in] pkt Packet handle + * + * @return Buffer handle + * + * @note This routine converts a packet handle to a buffer handle. + * This routine always succeeds (assuming pkt is a valid packet + * handle) since all packets are buffers. + */ +odp_buffer_t odp_packet_to_buffer(odp_packet_t pkt); -/** Invalid offset */ -#define ODP_PACKET_OFFSET_INVALID ((uint32_t)-1) +/** + * Obtain buffer pool handle of a packet + * + * @param[in] pkt Packet handle + * + * @return Buffer pool the packet was allocated from + * + * @note This routine is an accessor function that returns the handle + * of the buffer pool containing the referenced packet. + */ +odp_buffer_pool_t odp_packet_pool(odp_packet_t pkt); +/** + * Packet alloc + * + * @param[in] pool Pool handle for a pool of type ODP_BUFFER_TYPE_PACKET + * + * @return Packet handle or ODP_PACKET_INVALID + * + * @note This routine is used to allocate a packet from a buffer pool + * of type ODP_BUFFER_TYPE_PACKET. The returned odp_packet_t is an + * opaque handle for the packet that can be used in further calls to + * manipulate the allocated packet. The value ODP_PACKET_INVALID is + * returned if the request cannot be satisfied. The length of the + * allocated packet is set to 0.
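+ *
+ * @par
+ * As an illustrative sketch only (pkt_pool is a hypothetical
+ * handle assumed to have been created earlier with type
+ * ODP_BUFFER_TYPE_PACKET), a minimal allocate/free cycle might
+ * look like this:
+ *
+ * @code
+ * odp_packet_t pkt = odp_packet_alloc(pkt_pool);
+ *
+ * if (pkt == ODP_PACKET_INVALID)
+ *         return -1;               /* pool exhausted */
+ *
+ * /* ...build and use the packet... */
+ *
+ * odp_packet_free(pkt);            /* return pkt to pkt_pool */
+ * @endcode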
+ * + * @note If non-persistent user metadata is associated with the + * underlying buffer that contains the packet, the buf_init() routine + * specified as part of the containing buffer pool will be called as + * part of buffer allocation to enable the application to initialize + * the user metadata associated with it. + */ +odp_packet_t odp_packet_alloc(odp_buffer_pool_t pool); /** - * ODP packet segment handle + * Allocate a packet from a buffer pool of a specified length + * + * @param[in] pool Pool handle + * @param[in] len Length of packet requested + * + * @return Packet handle or ODP_PACKET_INVALID + * + * @note This routine is used to allocate a packet of a given length + * from a packet buffer pool. The returned odp_packet_t is an opaque + * handle for the packet that can be used in further calls to + * manipulate the allocated packet. The returned buffer is + * initialized as an ODP packet with its length set to the + * requested len. The caller will then initialize the packet with + * headers and payload as needed. This call itself does not + * initialize packet contents or the metadata that would be present + * following a packet parse. */ -typedef int odp_packet_seg_t; +odp_packet_t odp_packet_alloc_len(odp_buffer_pool_t pool, size_t len); -/** Invalid packet segment */ -#define ODP_PACKET_SEG_INVALID -1 +/** + * Packet free + * + * @param[in] pkt Handle of the packet to be freed + * + * @note This routine is used to return a packet to its + * containing buffer pool. Results are undefined if an application + * attempts to reference a packet after it is freed. + */ +void odp_packet_free(odp_packet_t pkt); /** - * ODP packet segment info + * Initialize a packet + * + * @param[in] pkt Handle of the packet to be initialized + * + * @note This routine is called following packet allocation to + * initialize the packet metadata and internal structure to support + * packet operations. Note that this function is performed + * automatically whenever a packet is allocated, so an application + * would only call it to re-initialize a packet, discarding whatever + * previous contents existed and starting a fresh packet without + * having to free and re-allocate the packet. Re-initializing a + * packet resets its headroom and tailroom to their default values + * (from the containing packet pool) and sets the packet length to 0. */ -typedef struct odp_packet_seg_info_t { - void *addr; /**< Segment start address */ - size_t size; /**< Segment maximum data size */ - void *data; /**< Segment data address */ - size_t data_len; /**< Segment data length */ -} odp_packet_seg_info_t; +void odp_packet_init(odp_packet_t pkt); /** - * Initialize the packet + * Get the headroom available for a packet * - * Needs to be called if the user allocates a packet buffer, i.e. the packet - * has not been received from I/O through ODP. + * @param[in] pkt Packet handle * - * @param pkt Packet handle + * @return Headroom available for this packet, in bytes. + * + * @note This routine returns the current headroom available for a + * packet.
The initial value for this is taken either from the + * containing buffer pool (for explicit packet allocation) or from the + * Class of Service (CoS) on packet reception. It is adjusted + * dynamically by the odp_packet_push_head() and + * odp_packet_pull_head() routines. */ -void odp_packet_init(odp_packet_t pkt); +size_t odp_packet_headroom(odp_packet_t pkt); /** - * Convert a buffer handle to a packet handle + * Get the tailroom available for a packet * - * @param buf Buffer handle + * @param[in] pkt Packet handle * - * @return Packet handle + * @return Tailroom available for this packet, in bytes. + * + * @note This routine returns the current tailroom available for a + * packet. The initial value for this is taken from the + * containing buffer pool. It is adjusted dynamically by the + * odp_packet_push_tail() and odp_packet_pull_tail() routines. */ -odp_packet_t odp_packet_from_buffer(odp_buffer_t buf); +size_t odp_packet_tailroom(odp_packet_t pkt); /** - * Convert a packet handle to a buffer handle + * Get packet length * - * @param pkt Packet handle + * @param[in] pkt Packet handle * - * @return Buffer handle + * @return Packet length in bytes + * + * @note This routine is an accessor function that returns the length + * (in bytes) of a packet. This is the total number of octets that + * would be transmitted for the packet, not including the Ethernet + * Frame Check Sequence (FCS), and includes all packet headers as well + * as payload. Results are undefined if the supplied pkt does not + * specify a valid packet. Note that packet length will change in + * response to headroom/tailroom and/or split/join operations. As a + * result, this attribute does not have a setter accessor function. */ -odp_buffer_t odp_packet_to_buffer(odp_packet_t pkt); +size_t odp_packet_len(odp_packet_t pkt); /** - * Set the packet length + * Get address and size of user metadata associated with a packet + * + * @param[in] pkt Packet handle + * @param[out] udata_size Number of bytes of user metadata available + * at the returned address * - * @param pkt Packet handle - * @param len Length of packet in bytes + * @return Address of the user metadata for this packet + * or NULL if the buffer has no user metadata. + * + * @note This routine returns the address of the user metadata + * associated with an ODP packet. This enables the caller to read or + * write the user metadata associated with the packet. The caller + * MUST honor the returned udata_size in referencing this storage. */ -void odp_packet_set_len(odp_packet_t pkt, size_t len); +void *odp_packet_udata(odp_packet_t pkt, size_t *udata_size); /** - * Get the packet length + * Get address of user metadata associated with a packet * - * @param pkt Packet handle + * @param[in] pkt Packet handle * - * @return Packet length in bytes + * @return Address of the user metadata for this packet + * or NULL if the buffer has no user metadata. + * + * @note This routine returns the address of the user metadata + * associated with an ODP packet. This enables the caller to read or + * write the user metadata associated with the packet. This routine + * is intended as a fast-path version of odp_packet_udata() for + * callers that only require the address of the user metadata area + * associated with the packet. This routine assumes that the caller + * already knows and will honor the size limits of this area.
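+ *
+ * @par
+ * A brief usage sketch (my_state_t is a hypothetical application
+ * type stored in the user metadata area):
+ *
+ * @code
+ * size_t udsize;
+ * my_state_t *state = odp_packet_udata(pkt, &udsize);
+ *
+ * if (state != NULL && udsize >= sizeof(my_state_t))
+ *         state->pkts_seen++;      /* stay within udata_size bytes */
+ * @endcode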
*/ -size_t odp_packet_get_len(odp_packet_t pkt); +void *odp_packet_udata_addr(odp_packet_t pkt); /** - * Set packet user context + * Tests if packet is valid * - * @param buf Packet handle - * @param ctx User context + * @param[in] pkt Packet handle * + * @return 1 if valid, otherwise 0 + * + * @note This routine tests whether a packet is valid. A packet is + * valid if the packet identified by the supplied odp_packet_t exists + * and has been allocated. */ -void odp_packet_set_ctx(odp_packet_t buf, const void *ctx); +int odp_packet_is_valid(odp_packet_t pkt); /** - * Get packet user context + * Tests if packet is segmented + * + * @param[in] pkt Packet handle * - * @param buf Packet handle + * @return 1 if packet has more than one segment, otherwise 0 * - * @return User context + * @note This routine tests whether a packet is segmented. Logically + * equivalent to testing whether odp_packet_segment_count(pkt) > 1, + * but may be more efficient in some implementations. */ -void *odp_packet_get_ctx(odp_packet_t buf); +int odp_packet_is_segmented(odp_packet_t pkt); /** - * Packet buffer start address + * Print packet metadata to ODP Log * - * Returns a pointer to the start of the packet buffer. The address is not - * necessarily the same as packet data address. E.g. on a received Ethernet - * frame, the protocol header may start 2 or 6 bytes within the buffer to - * ensure 32 or 64-bit alignment of the IP header. + * @param[in] pkt Packet handle + * + * @note This routine is used for debug purposes to print the metadata + * associated with a packet to the ODP log. This routine is OPTIONAL + * and MAY be treated as a no-op if the function is not available or + * if the supplied odp_packet_t is not valid. + */ +void odp_packet_print(odp_packet_t pkt); + +/** + * Parse a packet and set its metadata. + * + * @param[in] pkt Packet handle of packet to be parsed + * + * @return 1 if packet has any parse errors, 0 otherwise + * + * @note This routine requests that the specified packet be parsed and + * the metadata associated with it be set. The return value + * indicates whether the parse was successful or if any parse errors + * were encountered. The intent of this routine is to allow + * applications that construct or modify packets to force an + * implementation-provided re-parse to set the relevant packet + * metadata. As an alternative, the application is free to set these + * individually as it desires with appropriate setter functions; + * however, in this case it is the application’s responsibility to + * ensure that they are set consistently, as no error checking is + * performed by the setters. Calling odp_packet_parse(), by contrast, + * guarantees that they will be set properly to reflect the actual + * contents of the packet. + */ +int odp_packet_parse(odp_packet_t pkt); + +/** + * Check for packet errors * - * Use odp_packet_l2(pkt) to get the start address of a received valid frame - * or odp_packet_data(pkt) to get the current packet data address. + * Checks all error flags at once. * - * @param pkt Packet handle + * @param[in] pkt Packet handle * - * @return Pointer to the start of the packet buffer + * @return 1 if packet has errors, 0 otherwise * - * @see odp_packet_l2(), odp_packet_data() + * @note This routine is a summary routine that indicates whether the + * referenced packet contains any errors. If odp_packet_error() is 0 + * then the packet is well-formed.
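+ *
+ * @par
+ * For example, a receive loop might discard malformed packets
+ * before further processing (an illustrative sketch, not a
+ * normative usage):
+ *
+ * @code
+ * if (odp_packet_error(pkt)) {
+ *         odp_packet_free(pkt);    /* drop packets with parse errors */
+ *         continue;                /* fetch the next packet */
+ * }
+ * @endcode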
*/ -uint8_t *odp_packet_addr(odp_packet_t pkt); +int odp_packet_error(odp_packet_t pkt); /** - * Packet data address + * Control indication of packet error. + * + * @param[in] pkt Packet handle + * @param[in] val Value to set for this bit (0 or 1). * - * Returns the current packet data address. When a packet is received from - * packet input, the data address points to the first byte of the packet. + * @note This routine is used to set the error flag for a packet. + * Note that while error is a summary bit, at present ODP does not + * define any error detail bits. + */ +void odp_packet_set_error(odp_packet_t pkt, int val); + +/** + * Examine packet reference count + * + * @param[in] pkt Packet handle + * + * @return reference count of the packet + * + * @note This routine examines the reference count associated with a + * packet. The reference count is used to control when a packet is + * freed. When initially allocated, the refcount for a packet is set + * to 1. When a packet is transmitted its refcount is decremented and + * if the refcount is 0 then the packet is freed by the transmit + * function of the odp_pktio_t that transmits it. If the refcount is + * greater than zero then the packet is not freed and instead is + * returned to the application for further processing. Note that a + * packet refcount is an unsigned integer and can never be less than + * 0. + */ +unsigned int odp_packet_refcount(odp_packet_t pkt); + +/** + * Increment a packet’s refcount. * - * @param pkt Packet handle + * @param[in] pkt Packet handle + * @param[in] val Value to increment refcount by * - * @return Pointer to the packet data + * @return The packet refcount following increment * - * @see odp_packet_l2(), odp_packet_addr() + * @note This routine is used to increment the refcount for a packet + * by a specified amount. */ -uint8_t *odp_packet_data(odp_packet_t pkt); +unsigned int odp_packet_incr_refcount(odp_packet_t pkt, unsigned int val); /** - * Get pointer to the start of the L2 frame + * Decrement a packet’s refcount. * - * The L2 frame header address is not necessarily the same as the address of the - * packet buffer, see odp_packet_addr() + * @param[in] pkt Packet handle + * @param[in] val Value to decrement refcount by + * + * @return The packet refcount following decrement + * + * @note This routine is used to decrement the refcount for a packet + * by a specified amount. The refcount will never be decremented + * below 0 regardless of the specified val. + */ +unsigned int odp_packet_decr_refcount(odp_packet_t pkt, unsigned int val); + +/** + * Check for L2 header, e.g., Ethernet * - * @param pkt Packet handle + * @param[in] pkt Packet handle * - * @return Pointer to L2 header or NULL if not found + * @return 1 if packet contains a valid & known L2 header, 0 otherwise * - * @see odp_packet_addr(), odp_packet_data() + * @note This routine indicates whether the referenced packet contains + * a valid Layer 2 header. */ -uint8_t *odp_packet_l2(odp_packet_t pkt); +int odp_packet_inflag_l2(odp_packet_t pkt); /** - * Return the byte offset from the packet buffer to the L2 frame + * Control indication of Layer 2 presence. + * + * @param[in] pkt Packet handle * - * @param pkt Packet handle + * @param[in] val 1 if packet contains a valid & known L2 header, 0 otherwise * - * @return L2 byte offset or ODP_PACKET_OFFSET_INVALID if not found + * @note This routine sets whether the referenced packet contains a + * valid Layer 2 header.
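+ *
+ * @par
+ * For example, an application that has just written its own
+ * Ethernet header at the start of a constructed packet might update
+ * the related metadata as a group (a sketch; as noted above,
+ * keeping these settings consistent with packet contents is the
+ * application’s responsibility):
+ *
+ * @code
+ * odp_packet_set_l2_offset(pkt, 0);   /* header begins at offset 0 */
+ * odp_packet_set_inflag_l2(pkt, 1);   /* valid L2 header present   */
+ * odp_packet_set_inflag_eth(pkt, 1);  /* ...and it is Ethernet     */
+ * @endcode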
*/ -size_t odp_packet_l2_offset(odp_packet_t pkt); +void odp_packet_set_inflag_l2(odp_packet_t pkt, int val); /** - * Set the byte offset to the L2 frame + * Check for L3 header, e.g. IPv4, IPv6 + * + * @param[in] pkt Packet handle + * + * @return 1 if packet contains a valid & known L3 header, 0 otherwise * - * @param pkt Packet handle - * @param offset L2 byte offset + * @note This routine indicates whether the referenced packet contains + * a valid Layer 3 header. */ -void odp_packet_set_l2_offset(odp_packet_t pkt, size_t offset); +int odp_packet_inflag_l3(odp_packet_t pkt); +/** + * Control indication of L3 header, e.g. IPv4, IPv6 + * + * @param[in] pkt Packet handle + * + * @param[in] val 1 if packet contains a valid & known L3 header, 0 otherwise + * + * @note This routine sets whether the referenced packet contains a + * valid Layer 3 header. + */ +void odp_packet_set_inflag_l3(odp_packet_t pkt, int val); /** - * Get pointer to the start of the L3 packet + * Check for L4 header, e.g. UDP, TCP (also ICMP) * - * @param pkt Packet handle + * @param[in] pkt Packet handle * - * @return Pointer to L3 packet or NULL if not found + * @return 1 if packet contains a valid & known L4 header, 0 otherwise * + * @note This routine indicates whether the referenced packet contains + * a valid Layer 4 header. */ -uint8_t *odp_packet_l3(odp_packet_t pkt); +int odp_packet_inflag_l4(odp_packet_t pkt); /** - * Return the byte offset from the packet buffer to the L3 packet + * Control indication of L4 header, e.g. UDP, TCP (also ICMP) * - * @param pkt Packet handle + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains a valid & known L4 header, 0 otherwise * - * @return L3 byte offset or ODP_PACKET_OFFSET_INVALID if not found + * @note This routine sets whether the referenced packet contains a + * valid Layer 4 header. */ -size_t odp_packet_l3_offset(odp_packet_t pkt); +void odp_packet_set_inflag_l4(odp_packet_t pkt, int val); /** - * Set the byte offset to the L3 packet + * Check for Ethernet header * - * @param pkt Packet handle - * @param offset L3 byte offset + * @param[in] pkt Packet handle + * + * @return 1 if packet contains a valid eth header, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * a valid Ethernet header. */ -void odp_packet_set_l3_offset(odp_packet_t pkt, size_t offset); +int odp_packet_inflag_eth(odp_packet_t pkt); +/** + * Control indication of Ethernet header + * + * @param[in] pkt Packet handle + * + * @param[in] val 1 if packet contains a valid eth header, 0 otherwise + * + * @note This routine sets whether the referenced packet contains a + * valid Ethernet header. + */ +void odp_packet_set_inflag_eth(odp_packet_t pkt, int val); /** - * Get pointer to the start of the L4 packet + * Check for Ethernet SNAP vs. DIX format * - * @param pkt Packet handle + * @param[in] pkt Packet handle * - * @return Pointer to L4 packet or NULL if not found + * @return 1 if packet is SNAP, 0 if it is DIX * + * @note This routine indicates whether the Ethernet framing of the + * referenced packet is SNAP. If odp_packet_inflag_eth() is 1 and + * odp_packet_inflag_snap() is 0 then the packet is in DIX format. */ -uint8_t *odp_packet_l4(odp_packet_t pkt); +int odp_packet_inflag_snap(odp_packet_t pkt); /** - * Return the byte offset from the packet buffer to the L4 packet + * Control indication of Ethernet SNAP vs.
DIX format * - * @param pkt Packet handle + * @param[in] pkt Packet handle + * @param[in] val 1 if packet is SNAP, 0 if it is DIX * - * @return L4 byte offset or ODP_PACKET_OFFSET_INVALID if not found + * @note This routine sets whether the referenced packet Ethernet is + * SNAP. */ -size_t odp_packet_l4_offset(odp_packet_t pkt); +void odp_packet_set_inflag_snap(odp_packet_t pkt, int val); /** - * Set the byte offset to the L4 packet + * Check for jumbo frame + * + * @param[in] pkt Packet handle * - * @param pkt Packet handle - * @param offset L4 byte offset + * @return 1 if packet contains jumbo frame, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * a jumbo frame. A jumbo frame has a length greater than 1500 bytes. */ -void odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset); +int odp_packet_inflag_jumbo(odp_packet_t pkt); /** - * Print (debug) information about the packet + * Control indication of jumbo frame + * + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains jumbo frame, 0 otherwise * - * @param pkt Packet handle + * @note This routine sets whether the referenced packet contains a + * jumbo frame. A jumbo frame has a length greater than 1500 bytes. */ -void odp_packet_print(odp_packet_t pkt); +void odp_packet_set_inflag_jumbo(odp_packet_t pkt, int val); /** - * Copy contents and metadata from pkt_src to pkt_dst - * Useful when creating copies of packets + * Check for VLAN * - * @param pkt_dst Destination packet - * @param pkt_src Source packet + * @param[in] pkt Packet handle * - * @return 0 if successful + * @return 1 if packet contains a VLAN header, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * one or more VLAN headers. */ -int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src); +int odp_packet_inflag_vlan(odp_packet_t pkt); /** - * Tests if packet is segmented (a scatter/gather list) + * Control indication of VLAN + * + * @param[in] pkt Packet handle * - * @param pkt Packet handle + * @param[in] val 1 if packet contains a VLAN header, 0 otherwise * - * @return Non-zero if packet is segmented, otherwise 0 + * @note This routine sets whether the referenced packet contains one + * or more VLAN headers. */ -int odp_packet_is_segmented(odp_packet_t pkt); +void odp_packet_set_inflag_vlan(odp_packet_t pkt, int val); /** - * Segment count + * Check for VLAN QinQ (stacked VLAN) * - * Returns number of segments in the packet. A packet has always at least one - * segment (the packet buffer itself). + * @param[in] pkt Packet handle + * + * @return 1 if packet contains a VLAN QinQ header, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * a double VLAN header (Q-in-Q) matching the IEEE 802.1ad + * specification. + */ +int odp_packet_inflag_vlan_qinq(odp_packet_t pkt); + +/** + * Controls indication of VLAN QinQ (stacked VLAN) * - * @param pkt Packet handle + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains a VLAN QinQ header, 0 otherwise * - * @return Segment count + * @note This routine sets whether the referenced packet contains a + * double VLAN header (Q-in-Q) matching the IEEE 802.1ad + * specification. */ -int odp_packet_seg_count(odp_packet_t pkt); +void odp_packet_set_inflag_vlan_qinq(odp_packet_t pkt, int val); /** - * Get segment by index + * Check for ARP + * + * @param[in] pkt Packet handle * - * @param pkt Packet handle - * @param index Segment index (0 ... 
seg_count-1) + * @return 1 if packet contains an ARP header, 0 otherwise * - * @return Segment handle, or ODP_PACKET_SEG_INVALID on an error + * @note This routine indicates whether the referenced packet contains + * an ARP header. */ -odp_packet_seg_t odp_packet_seg(odp_packet_t pkt, int index); +int odp_packet_inflag_arp(odp_packet_t pkt); /** - * Get next segment + * Controls indication of ARP * - * @param pkt Packet handle - * @param seg Current segment handle + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains an ARP header, 0 otherwise * - * @return Handle to next segment, or ODP_PACKET_SEG_INVALID on an error + * @note This routine sets whether the referenced packet contains an + * ARP header. */ -odp_packet_seg_t odp_packet_seg_next(odp_packet_t pkt, odp_packet_seg_t seg); +void odp_packet_set_inflag_arp(odp_packet_t pkt, int val); /** - * Segment info + * Check for IPv4 * - * Copies segment parameters into the info structure. + * @param[in] pkt Packet handle + * + * @return 1 if packet contains an IPv4 header, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * an IPv4 header. + */ +int odp_packet_inflag_ipv4(odp_packet_t pkt); + +/** + * Control indication of IPv4 * - * @param pkt Packet handle - * @param seg Segment handle - * @param info Pointer to segment info structure + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains an IPv4 header, 0 otherwise * - * @return 0 if successful, otherwise non-zero + * @note This routine sets whether the referenced packet contains an + * IPv4 header. */ -int odp_packet_seg_info(odp_packet_t pkt, odp_packet_seg_t seg, - odp_packet_seg_info_t *info); +void odp_packet_set_inflag_ipv4(odp_packet_t pkt, int val); /** - * Segment start address + * Check for IPv6 + * + * @param[in] pkt Packet handle * - * @param pkt Packet handle - * @param seg Segment handle + * @return 1 if packet contains an IPv6 header, 0 otherwise * - * @return Segment start address, or NULL on an error + * @note This routine indicates whether the referenced packet contains + * an IPv6 header. */ -void *odp_packet_seg_addr(odp_packet_t pkt, odp_packet_seg_t seg); +int odp_packet_inflag_ipv6(odp_packet_t pkt); /** - * Segment maximum data size + * Control indication of IPv6 * - * @param pkt Packet handle - * @param seg Segment handle + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains an IPv6 header, 0 otherwise * - * @return Segment maximum data size + * @note This routine sets whether the referenced packet contains an + * IPv6 header. */ -size_t odp_packet_seg_size(odp_packet_t pkt, odp_packet_seg_t seg); +void odp_packet_set_inflag_ipv6(odp_packet_t pkt, int val); /** - * Segment data address + * Check for IP fragment * - * @param pkt Packet handle - * @param seg Segment handle + * @param[in] pkt Packet handle * - * @return Segment data address + * @return 1 if packet is an IP fragment, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * an IP fragment. */ -void *odp_packet_seg_data(odp_packet_t pkt, odp_packet_seg_t seg); +int odp_packet_inflag_ipfrag(odp_packet_t pkt); /** - * Segment data length + * Controls indication of IP fragment * - * @param pkt Packet handle - * @param seg Segment handle + * @param[in] pkt Packet handle + * @param[in] val 1 if packet is an IP fragment, 0 otherwise * - * @return Segment data length + * @note This routine sets whether the referenced packet contains an + * IP fragment. 
*/ -size_t odp_packet_seg_data_len(odp_packet_t pkt, odp_packet_seg_t seg); +void odp_packet_set_inflag_ipfrag(odp_packet_t pkt, int val); /** - * Segment headroom + * Check for IP options + * + * @param[in] pkt Packet handle + * + * @return 1 if packet contains IP options, 0 otherwise * - * seg_headroom = seg_data - seg_addr + * @note This routine indicates whether the referenced packet contains + * IP options. + */ +int odp_packet_inflag_ipopt(odp_packet_t pkt); + +/** + * Controls indication of IP options * - * @param pkt Packet handle - * @param seg Segment handle + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains IP options, 0 otherwise * - * @return Number of octets from seg_addr to seg_data + * @note This routine sets whether the referenced packet contains IP + * options. */ -size_t odp_packet_seg_headroom(odp_packet_t pkt, odp_packet_seg_t seg); +void odp_packet_set_inflag_ipopt(odp_packet_t pkt, int val); /** - * Segment tailroom + * Check for IPSec * - * seg_tailroom = seg_size - seg_headroom - seg_data_len + * @param[in] pkt Packet handle * - * @param pkt Packet handle - * @param seg Segment handle + * @return 1 if packet requires IPSec processing, 0 otherwise * - * @return Number of octets from end-of-data to end-of-segment + * @note This routine indicates whether the referenced packet contains + * an IPSec header (ESP or AH). */ -size_t odp_packet_seg_tailroom(odp_packet_t pkt, odp_packet_seg_t seg); +int odp_packet_inflag_ipsec(odp_packet_t pkt); /** - * Push out segment head + * Control indication of IPSec * - * Push out segment data address (away from data) and increase data length. - * Does not modify packet in case of an error. + * @param[in] pkt Packet handle + * @param[in] val 1 if packet requires IPSec processing, 0 otherwise + * + * @note This routine sets whether the referenced packet contains an + * IPSec header (ESP or AH). + */ +void odp_packet_set_inflag_ipsec(odp_packet_t pkt, int val); + +/** + * Check for UDP * - * seg_data -= len - * seg_data_len += len + * @param[in] pkt Packet handle * - * @param pkt Packet handle - * @param seg Segment handle - * @param len Number of octets to push head (0 ... seg_headroom) + * @return 1 if packet contains a UDP header, 0 otherwise * - * @return New segment data address, or NULL on an error + * @note This routine indicates whether the referenced packet contains + * a UDP header. */ -void *odp_packet_seg_push_head(odp_packet_t pkt, odp_packet_seg_t seg, - size_t len); +int odp_packet_inflag_udp(odp_packet_t pkt); /** - * Pull in segment head + * Control indication of UDP * - * Pull in segment data address (towards data) and decrease data length. - * Does not modify packet in case of an error. + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains a UDP header, 0 otherwise + * + * @note This routine sets whether the referenced packet contains a + * UDP header. + */ +void odp_packet_set_inflag_udp(odp_packet_t pkt, int val); + +/** + * Check for TCP + * + * @param[in] pkt Packet handle + * + * @return 1 if packet contains a TCP header, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * a TCP header. + */ +int odp_packet_inflag_tcp(odp_packet_t pkt); + +/** + * Control indication of TCP + * + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains a TCP header, 0 otherwise + * + * @note This routine sets whether the referenced packet contains a + * TCP header. 
+ */ +void odp_packet_set_inflag_tcp(odp_packet_t pkt, int val); + +/** + * Check for TCP options + * + * @param[in] pkt Packet handle + * + * @return 1 if packet contains TCP options, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * TCP options. + */ +int odp_packet_inflag_tcpopt(odp_packet_t pkt); + +/** + * Control indication of TCP options + * + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains TCP options, 0 otherwise + * + * @note This routine sets whether the referenced packet contains TCP + * options. + */ +void odp_packet_set_inflag_tcpopt(odp_packet_t pkt, int val); + +/** + * Check for ICMP + * + * @param[in] pkt Packet handle + * + * @return 1 if packet contains an ICMP header, 0 otherwise + * + * @note This routine indicates whether the referenced packet contains + * an ICMP header. + */ +int odp_packet_inflag_icmp(odp_packet_t pkt); + +/** + * Control indication of ICMP + * + * @param[in] pkt Packet handle + * @param[in] val 1 if packet contains an ICMP header, 0 otherwise + * + * @note This routine sets whether the referenced packet contains an + * ICMP header. + */ +void odp_packet_set_inflag_icmp(odp_packet_t pkt, int val); + +/** + * Query Layer 3 checksum offload override setting + * + * @param[in] pkt Packet handle + * + * @return 0 if no Layer 3 checksum to be performed, 1 if yes, -1 if not set + * + * @note This routine indicates whether Layer 3 checksum offload + * processing is to be performed for the referenced packet. Since + * this is an override bit, if the application has not set this + * attribute an error (-1) is returned indicating that this bit has + * not been specified. + */ +int odp_packet_outflag_l3_chksum(odp_packet_t pkt); + +/** + * Override Layer 3 checksum calculation + * + * @param[in] pkt Packet handle + * @param[in] val 0 if no Layer 3 checksum to be performed, 1 if yes + * + * @return 0 if override successful, -1 if not + * + * @note This routine sets whether Layer 3 checksum offload processing + * is to be performed for the referenced packet. An error return (-1) + * indicates that the implementation is unable to provide per-packet + * overrides of this function. + */ +int odp_packet_set_outflag_l3_chksum(odp_packet_t pkt, int val); + +/** + * Query Layer 4 checksum offload override setting + * + * @param[in] pkt Packet handle + * + * @return 0 if no Layer 4 checksum to be performed, 1 if yes, -1 if not set + * + * @note This routine indicates whether Layer 4 checksum offload + * processing is to be performed for the referenced packet. Since + * this is an override bit, if the application has not set this + * attribute an error (-1) is returned indicating that this bit has + * not been specified. + */ +int odp_packet_outflag_l4_chksum(odp_packet_t pkt); + +/** + * Override Layer 4 checksum calculation + * + * @param[in] pkt Packet handle + * @param[in] val 0 if no Layer 4 checksum to be performed, 1 if yes + * + * @return 0 if override successful, -1 if not + * + * @note This routine specifies whether Layer 4 checksum offload + * processing is to be performed for the referenced packet. An error + * return (-1) indicates that the implementation is unable to provide + * per-packet overrides of this function. + */ +int odp_packet_set_outflag_l4_chksum(odp_packet_t pkt, int val); + +/** + * Get offset of start of Layer 2 headers + * + * @param[in] pkt Packet handle + * + * @return Byte offset into packet of start of Layer 2 headers + * or ODP_PACKET_OFFSET_INVALID if not found.
+ * + * @note This routine is an accessor function that returns the byte + * offset of the start of the Layer 2 headers of a packet. Results + * are undefined if the supplied pkt does not specify a valid packet. + * Note that if the packet contains unusual Layer 2 tags, the caller + * can use this function to parse the Layer 2 headers + * directly if desired. + * + */ +size_t odp_packet_l2_offset(odp_packet_t pkt); + +/** + * Specify start of Layer 2 headers * - * seg_data += len - * seg_data_len -= len + * @param[in] pkt Packet handle + * @param[in] offset Byte offset into packet of start of Layer 2 headers. * - * @param pkt Packet handle - * @param seg Segment handle - * @param len Number of octets to pull head (0 ... seg_data_len) + * @return 0 on Success, -1 on errors * - * @return New segment data address, or NULL on an error + * @note This routine is an accessor function that sets the byte + * offset of the start of the Layer 2 headers of a packet. Results + * are undefined if the supplied pkt does not specify a valid packet. + * An error return results if the specified offset is out of range. + * Note that this routine does not verify that the specified offset + * correlates with packet contents. The application assumes that + * responsibility when using this routine. */ -void *odp_packet_seg_pull_head(odp_packet_t pkt, odp_packet_seg_t seg, - size_t len); +int odp_packet_set_l2_offset(odp_packet_t pkt, size_t offset); /** - * Push out segment tail + * Returns the VLAN S-Tag and C-Tag associated with packet + * + * @param[in] pkt Packet handle + * @param[out] stag S-Tag associated with packet or 0x00000000 + * @param[out] ctag C-Tag associated with packet or 0x00000000 + * + * @note This routine returns the S-Tag (Ethertype 0x88A8) and C-Tag + * (Ethertype 0x8100) associated with the referenced packet. Note + * that the full tag (including the Ethertype) is returned so that the + * caller can easily distinguish between the two as well as handle + * older sources that use 0x8100 for both tags (QinQ). If the packet + * contains only one VLAN tag, it will be returned as the “S-Tag”. If + * the packet does not contain VLAN tags then both arguments will be + * returned as zeros. + * + * @par + * Note that the tag values returned by this routine are in + * host-endian format. VLAN tags themselves are always received and + * transmitted in network byte order. * - * Increase segment data length. + */ +void odp_packet_vlans(odp_packet_t pkt, uint32_t *stag, uint32_t *ctag); + +/** + * Specifies the VLAN S-Tag and C-Tag associated with packet + * + * @param[in] pkt Packet handle + * @param[in] stag S-Tag associated with packet or 0xFFFFFFFF + * @param[in] ctag C-Tag associated with packet or 0xFFFFFFFF + * + * @note This routine sets the S-Tag (Ethertype 0x88A8) and C-Tag + * (Ethertype 0x8100) associated with the referenced packet. A value + * of 0xFFFFFFFF is specified to indicate that no corresponding S-Tag + * or C-Tag is present. Note: This routine simply sets the VLAN + * metadata for the packet. It does not affect packet contents. It is + * the caller’s responsibility to ensure that the packet contents + * match the specified values. + */ +void odp_packet_set_vlans(odp_packet_t pkt, uint32_t stag, uint32_t ctag); + +/** + * Get offset of start of Layer 3 headers + * + * @param[in] pkt Packet handle + * + * @return Byte offset into packet of start of Layer 3 headers + * or ODP_PACKET_OFFSET_INVALID if not found.
+ * + * @note This routine is an accessor function that returns the byte + * offset of the start of the Layer 3 headers of a packet. Results + * are undefined if the supplied pkt does not specify a valid packet. + * In conjunction with the odp_packet_l3_protocol() routine, this + * routine allows the caller to process the Layer 3 header(s) of the + * packet directly, if desired. + */ +size_t odp_packet_l3_offset(odp_packet_t pkt); + +/** + * Set offset of start of Layer 3 headers + * + * @param[in] pkt Packet handle + * @param[in] offset Byte offset into packet of start of Layer 3 headers + * + * @return 0 on Success, -1 on errors + * + * @note This routine is an accessor function that sets the byte + * offset of the start of the Layer 3 headers of a packet. Results + * are undefined if the supplied pkt does not specify a valid packet. + * An error return results if the specified offset is out of range. + * In conjunction with the odp_packet_set_l3_protocol() routine, this + * routine allows the caller to specify the Layer 3 header metadata + * of the packet directly, if desired. Note that this routine does not + * verify that the specified offset correlates with packet contents. + * The application assumes that responsibility when using this + * routine. + */ +int odp_packet_set_l3_offset(odp_packet_t pkt, size_t offset); + +/** + * Get the Layer 3 protocol of this packet + * + * @param[in] pkt Packet handle + * + * @return Ethertype of the Layer 3 protocol used or + * ODP_NO_L3_PROTOCOL if no Layer 3 protocol exists. + * + * @note This routine returns the IANA-assigned Ethertype of the Layer + * 3 protocol used in the packet. This is the last Layer 2 Ethertype + * that defines the Layer 3 protocol. This is widened from a uint16_t + * to allow for error return codes. Note: This value is + * returned in host-endian format. + */ +uint32_t odp_packet_l3_protocol(odp_packet_t pkt); + +/** + * Set the Layer 3 protocol of this packet + * + * @param[in] pkt Packet handle + * @param[in] pcl Layer 3 protocol value + * + * @note This routine sets the IANA-assigned Ethertype of the Layer 3 + * protocol used in the packet. This is the last Layer 2 Ethertype + * that defines the Layer 3 protocol. Note: This routine simply sets + * the Layer 3 protocol metadata for the packet. It does not affect + * packet contents. It is the caller’s responsibility to ensure that + * the packet contents match the specified value. + */ +void odp_packet_set_l3_protocol(odp_packet_t pkt, uint16_t pcl); + +/** + * Get offset of start of Layer 4 headers + * + * @param[in] pkt Packet handle + * + * @return Byte offset into packet of start of Layer 4 headers + * or ODP_PACKET_OFFSET_INVALID if not found. + * + * @note This routine is an accessor function that returns the byte + * offset of the start of the Layer 4 headers of a packet. Results + * are undefined if the supplied pkt does not specify a valid packet. + * In conjunction with the odp_packet_l4_protocol() routine, this + * routine allows the caller to process the Layer 4 header associated + * with the packet directly if desired. + */ +size_t odp_packet_l4_offset(odp_packet_t pkt); + +/** + * Set offset of start of Layer 4 headers + * + * @param[in] pkt Packet handle + * @param[in] offset Byte offset into packet of start of Layer 4 headers + * + * @return 0 on Success, -1 on error. + * + * @note This routine is an accessor function that sets the byte + * offset of the start of the Layer 4 headers of a packet.
Results + * are undefined if the supplied pkt does not specify a valid + * packet. An error return results if the specified offset is out of + * range. In conjunction with the odp_packet_set_l4_protocol() + * routine, this routine allows the caller to specify the Layer 4 + * header metadata of the packet directly if desired. Note that + * this routine does not verify that the specified offset correlates + * with packet contents. The application assumes that responsibility + * when using this routine. + */ +int odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset); + +/** + * Get the Layer 4 protocol of this packet + * + * @param[in] pkt Packet handle + * + * @return Protocol number of the Layer 4 protocol used or + * ODP_NO_L4_PROTOCOL if none exists. + * + * @note This routine returns the IANA-assigned Protocol number of the + * Layer 4 protocol used in the packet. This is widened from a + * uint8_t to allow for error return codes. + */ +uint32_t odp_packet_l4_protocol(odp_packet_t pkt); + +/** + * Set the Layer 4 protocol of this packet + * + * @param[in] pkt Packet handle + * @param[in] pcl Layer 4 protocol value + * + * @note This routine sets the IANA-assigned Protocol number of the + * Layer 4 protocol used in the packet. Note: This routine simply + * sets the Layer 4 protocol metadata for the packet. It does not + * affect packet contents. It is the caller’s responsibility to + * ensure that the packet contents match the specified value. + */ +void odp_packet_set_l4_protocol(odp_packet_t pkt, uint8_t pcl); + +/** + * Get offset of start of packet payload + * + * @param[in] pkt Packet handle + * + * @return Byte offset into packet of start of packet payload + * or ODP_PACKET_OFFSET_INVALID if not found. + * + * @note This routine is an accessor function that returns the byte + * offset of the start of the packet payload. Results are undefined + * if the supplied pkt does not specify a valid packet. For ODP, the + * packet payload is defined as the first byte beyond the last packet + * header recognized by the ODP packet parser. For certain protocols + * this may in fact be the start of a Layer 5 header, or an + * unrecognized Layer 3 or Layer 4 header, however ODP does not make + * this distinction. + */ +size_t odp_packet_payload_offset(odp_packet_t pkt); + +/** + * Set offset of start of packet payload + * + * @param[in] pkt Packet handle + * @param[in] offset Byte offset into packet of start of packet payload + * + * @return 0 on Success, -1 on error + * + * @note This routine is an accessor function that sets the byte + * offset of the start of the packet payload. Results are undefined + * if the supplied pkt does not specify a valid packet. An error + * return results if the specified offset is out of range. For ODP, + * the packet payload is defined as the first byte beyond the last + * packet header recognized by the ODP packet parser. For certain + * protocols this may in fact be the start of a Layer 5 header, or an + * unrecognized Layer 3 or Layer 4 header, however ODP does not make + * this distinction. Note that this routine does not verify that the + * specified offset correlates with packet contents. The application + * assumes that responsibility when using this routine. + */ +int odp_packet_set_payload_offset(odp_packet_t pkt, size_t offset); + +/** + * Get stored packet input handle from packet + * + * @param[in] pkt ODP packet buffer handle + * + * @return Packet IO handle + * + * @note This access function returns the odp_pktio_t that received + * this packet.
This will be ODP_PKTIO_INVALID if the packet was + * created by the application rather than being received. Calling + * odp_packet_init() will also reset this to ODP_PKTIO_INVALID. + */ +odp_pktio_t odp_packet_input(odp_packet_t pkt); + +/** + * Get count of number of segments in a packet + * + * @param[in] pkt Packet handle + * + * @return Count of the number of segments in pkt + * + * @note This routine returns the number of physical segments in the + * referenced packet. A packet that is not in an aggregated buffer + * will return 1 since it consists of a single segment. The + * packet segments of the aggregate buffer are in the range + * [0..odp_packet_segment_count(pkt)-1]. Results are undefined if the + * supplied pkt is invalid. Use odp_packet_is_valid() to verify + * packet validity if needed. + */ +int odp_packet_segment_count(odp_packet_t pkt); + +/** + * Get the segment identifier for a packet segment by index + * + * @param[in] pkt Packet handle + * @param[in] ndx Segment index of segment of interest + * + * @return Segment identifier or ODP_SEGMENT_INVALID if the + * supplied ndx is out of range. + * + * @note This routine returns the abstract identifier + * (odp_packet_segment_t) of a particular segment by its index value. + * Valid ndx values are in the range + * [0..odp_packet_segment_count(pkt)-1]. Results are undefined if the + * supplied pkt is invalid. Use odp_packet_is_valid() to verify + * packet validity if needed. + */ +odp_packet_segment_t odp_packet_segment_by_index(odp_packet_t pkt, size_t ndx); + +/** + * Get the next segment identifier for a packet segment + * + * @param[in] pkt Packet handle + * @param[in] seg Segment identifier of the previous segment + * + * @return Segment identifier of next segment or ODP_SEGMENT_INVALID + * + * @note This routine returns the abstract identifier + * (odp_packet_segment_t) of the next packet segment in a buffer + * aggregate. The input specifies the packet and the previous segment + * identifier. There are three use cases for this routine: + * + * -# If the input seg is ODP_SEGMENT_START then the segment + * identifier returned is that of the first segment in the packet. + * ODP_SEGMENT_NULL MAY be used as a synonym for ODP_SEGMENT_START + * for symmetry if desired. + * + * -# If the input seg is not the last segment in the packet then the + * segment identifier of the next segment following seg is returned. + * + * -# If the input seg is the segment identifier of the last segment + * in the packet then ODP_SEGMENT_NULL is returned. + * + */ +odp_packet_segment_t odp_packet_segment_next(odp_packet_t pkt, + odp_packet_segment_t seg); + +/** + * Get start address for a specified packet segment + * + * @param[in] pkt Packet handle + * @param[in] seg Segment identifier of the packet to be addressed + * @param[out] seglen Returned number of bytes in this packet + * segment available at returned address + * + * @return Start address of packet within segment or NULL + * + * @note This routine is used to obtain addressability to a segment + * within a packet aggregate at a specified segment identifier. The + * returned seglen indicates the number of bytes addressable at the + * returned address. Note that the returned address is always within + * the packet and the address returned is the first packet byte within + * the specified segment.
So if the packet itself begins at a + * non-zero byte offset into the physical segment then the address + * returned by this call will not be the same as the starting address + * of the physical segment containing the packet. + */ +void *odp_packet_segment_map(odp_packet_t pkt, odp_packet_segment_t seg, + size_t *seglen); + +/** + * Unmap a packet segment + * + * @param[in] seg Packet segment handle + * + * @note This routine is used to unmap a packet segment previously + * mapped by odp_packet_segment_map(). Following this call, + * applications MUST NOT attempt to reference the segment via any + * pointer returned from a previous odp_packet_segment_map() call + * referring to it. It is intended to allow certain NUMA + * architectures to better manage the coherency of mapped segments. + * For non-NUMA architectures this routine will be a no-op. Note that + * implementations SHOULD implicitly unmap all packet segments + * whenever a packet is freed or added to a queue as this indicates + * that the caller is relinquishing control of the packet. + */ +void odp_packet_segment_unmap(odp_packet_segment_t seg); + +/** + * Get start address for a specified packet offset + * + * @param[in] pkt Packet handle + * @param[in] offset Byte offset within the packet to be addressed + * @param[out] seglen Returned number of bytes in this packet + * segment available at returned address + * + * @return Offset start address or NULL + * + * @note This routine returns the address of the packet starting at + * the specified byte offset. The returned seglen indicates the + * number of addressable bytes available at the returned address. + * This limit MUST be honored by the caller. + * + * @par + * Note that this is a general routine for accessing arbitrary + * byte offsets within a packet and is the basis for the “shortcut” + * APIs described below that access specific parser-identified offsets + * of interest. + * + * @par + * Note also that the returned seglen is always the minimum of + * the physical buffer segment size available at the starting offset + * and odp_packet_len() - offset. This rule applies to the “shortcut” + * routines that follow as well. + * + * @par + * For example, suppose the underlying implementation uses 256 + * byte physical segment sizes and odp_packet_len() is 900. In this + * case a call to odp_packet_offset_map() for offset 200 would return + * a seglen of 56, a call to odp_packet_offset_map() for offset 256 + * would return a seglen of 256, and a call to odp_packet_offset_map() + * for offset 768 would return a seglen of 132 since the packet ends + * there. + */ +void *odp_packet_offset_map(odp_packet_t pkt, size_t offset, + size_t *seglen); + +/** + * Unmap a packet segment by offset + * + * @param[in] pkt Packet handle + * @param[in] offset Packet offset + * + * @note This routine is used to unmap a buffer segment previously + * implicitly mapped by odp_packet_offset_map(). Following this call + * the application MUST NOT attempt to reference the segment via any + * pointer returned by a prior odp_packet_offset_map() call relating + * to this offset. It is intended to allow certain NUMA architectures + * to better manage the coherency of mapped segments. For non-NUMA + * architectures this routine will be a no-op. Note that + * implementations SHOULD implicitly unmap all packet segments + * whenever a packet is added to a queue as this indicates that the + * caller is relinquishing control of the packet.
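+ *
+ * @par
+ * The following sketch shows how an application might walk an
+ * entire packet while honoring the returned seglen limits
+ * (consume_bytes() is a hypothetical application routine):
+ *
+ * @code
+ * size_t offset = 0;
+ * size_t seglen;
+ *
+ * while (offset < odp_packet_len(pkt)) {
+ *         uint8_t *addr = odp_packet_offset_map(pkt, offset, &seglen);
+ *
+ *         consume_bytes(addr, seglen);      /* at most seglen bytes */
+ *         odp_packet_offset_unmap(pkt, offset);
+ *         offset += seglen;
+ * }
+ * @endcode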
+ */ +void odp_packet_offset_unmap(odp_packet_t pkt, size_t offset); + +/** + * Map packet to provide addressability to it + * + * @param[in] pkt Packet handle + * @param[out] seglen Number of contiguous bytes available at returned address + * + * @return Packet start address or NULL + * + * @note This routine is an accessor function that returns the + * starting address of the packet. This is the first byte that would + * be placed on the wire if the packet were transmitted at the time of + * the call. This is normally the same as the first byte of the + * Ethernet frame that was received, and would normally be the start + * of the L2 header. Behavior of this routine is equivalent to the + * call: + * + * @code + * odp_packet_offset_map(pkt,0,&seglen); + * @endcode + * + * @par + * It is thus a shortcut for rapid access to the raw packet + * headers. Note that the returned seglen is the minimum of the + * packet length and the number of contiguous bytes available in the + * packet segment containing the returned starting address. It is a + * programming error to attempt to address beyond this returned + * length. + * + * @par + * For packets created by odp_packet_alloc() or + * odp_packet_alloc_len() this is the first byte of the allocated + * packet’s contents. Note that in the case of odp_packet_alloc() the + * packet length defaults to 0 and in the case of + * odp_packet_alloc_len() the contents of the packet is indeterminate + * until the application creates that content. Results are undefined + * if the supplied pkt does not represent a valid packet. + * + * @par + * Note that applications would normally not use this routine + * unless they need to do their own parsing of header fields or are + * otherwise directly adding or manipulating their own packet headers. + * Applications SHOULD normally use accessor functions to obtain the + * parsed header information they need directly. + * + */ +void *odp_packet_map(odp_packet_t pkt, size_t *seglen); + +/** + * Get addressability to first packet segment + * + * @param[in] pkt Packet handle + * + * @return Packet start address or NULL + * + * @warning Deprecated API! + * @warning + * This API provides a fast path for addressability to the first + * segment of a packet. Calls to this routine SHOULD be replaced + * with corresponding calls to odp_packet_map() since this routine + * gives no indication of addressing limits of the returned pointer. + */ +void *odp_packet_addr(odp_packet_t pkt); + +/** + * Get address for the preparsed Layer 2 header + * + * @param[in] pkt Packet handle + * @param[out] seglen Returned number of bytes in this packet + * segment available at returned address + * + * @return Layer 2 start address or NULL + * + * @note This routine provides the caller with addressability to the + * first Layer 2 header of the packet, as identified by the ODP + * parser. Note that this may not necessarily represent the first + * byte of the packet as the caller may have pushed additional + * (unparsed) headers onto the packet. Also, if the packet does not + * have a recognized Layer 2 header then this routine will return NULL + * while odp_packet_map() will always return the address of the first + * byte of the packet (even if the packet is of null length). + * + * @par + * Note that the behavior of this routine is identical to the + * call odp_packet_offset_map(pkt,odp_packet_l2_offset(pkt),&seglen). 
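+ *
+ * @par
+ * For example, a caller could safely read the destination MAC
+ * address of a received Ethernet frame as follows (dst_mac is a
+ * hypothetical 6-byte application buffer; 14 bytes is the untagged
+ * DIX header length):
+ *
+ * @code
+ * size_t seglen;
+ * uint8_t *l2 = odp_packet_l2_map(pkt, &seglen);
+ *
+ * if (l2 != NULL && seglen >= 14)
+ *         memcpy(dst_mac, l2, 6);  /* dst MAC: first 6 header bytes */
+ * @endcode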
+ * + */ +void *odp_packet_l2_map(odp_packet_t pkt, size_t *seglen); + +/** + * Get address for the preparsed Layer 3 header + * + * @param[in] pkt Packet handle + * @param[out] seglen Returned number of bytes in this packet + * segment available at returned address + * + * @return Layer 3 start address or NULL + * + * @note This routine provides the caller with addressability to the + * first Layer 3 header of the packet, as identified by the ODP + * parser. If the packet does not have a recognized Layer 3 header + * then this routine will return NULL. + * + * @par + * Note that the behavior of this routine is identical to the + * call odp_packet_offset_map(pkt,odp_packet_l3_offset(pkt),&seglen). + * + */ +void *odp_packet_l3_map(odp_packet_t pkt, size_t *seglen); + +/** + * Get address for the preparsed Layer 4 header + * + * @param[in] pkt Packet handle + * @param[out] seglen Returned number of bytes in this packet + * segment available at returned address + * + * @return Layer 4 start address or NULL + * + * @note This routine provides the caller with addressability to the + * first Layer 4 header of the packet, as identified by the ODP + * parser. If the packet does not have a recognized Layer 4 header + * then this routine will return NULL. + * + * @par + * Note that the behavior of this routine is identical to the + * call odp_packet_offset_map(pkt,odp_packet_l4_offset(pkt),&seglen). + * + */ +void *odp_packet_l4_map(odp_packet_t pkt, size_t *seglen); + +/** + * Get address for the packet payload + * + * @param[in] pkt Packet handle + * @param[out] seglen Returned number of bytes in this packet + * segment available at returned address + * + * @return Payload start address or NULL + * + * @note This routine provides the caller with addressability to the + * payload of the packet, as identified by the ODP parser. If the + * packet does not have a recognized payload (e.g., a TCP ACK packet) + * then this routine will return NULL. As noted above, ODP defines + * the packet payload to be the first byte after the last recognized + * header. This may in fact represent a Layer 5 header, or an + * unrecognized Layer 3 or Layer 4 header. It is an application + * responsibility to know how to deal with these bytes based on its + * protocol knowledge. + * + * @par + * Note that the behavior of this routine is identical to the call + * odp_packet_offset_map(pkt,odp_packet_payload_offset(pkt),&seglen). + * + */ +void *odp_packet_payload_map(odp_packet_t pkt, size_t *seglen); + +/** + * Clone a packet, returning an exact copy of it + * + * @param[in] pkt Packet handle of packet to duplicate + * + * @return Handle of the duplicated packet or ODP_PACKET_INVALID + * if the operation was not performed + * + * @note This routine allows an ODP packet to be cloned in an + * implementation-defined manner. The contents of the returned + * odp_packet_t is an exact copy of the input packet. The + * implementation MAY perform this operation via reference counts, + * resegmentation, or any other technique it wishes to employ. The + * cloned packet is an element of the same buffer pool as the input + * pkt and shares the same system metadata such as headroom and + * tailroom. If the input pkt contains user metadata, then this data + * MUST be copied to the returned packet by the ODP implementation. + * + * @par + * This routine is OPTIONAL. 
+ *
+ * @par
+ * This routine is OPTIONAL. An implementation that does not
+ * support this function MUST provide a matching routine that simply
+ * returns ODP_PACKET_INVALID with an errno of
+ * ODP_FUNCTION_NOT_AVAILABLE.
+ */
+odp_packet_t odp_packet_clone(odp_packet_t pkt);
+
+/**
+ * Copy a packet, returning an exact copy of it
+ *
+ * @param[in] pkt Packet handle of packet to copy
+ * @param[in] pool Buffer pool to contain copied packet
+ *
+ * @return Handle of the copied packet or ODP_PACKET_INVALID
+ * if the operation was not performed
+ *
+ * @note This routine allows an ODP packet to be copied in an
+ * implementation-defined manner. The specified pool may or may not
+ * be different from that of the source packet, but if different MUST
+ * be of type ODP_BUFFER_TYPE_PACKET. The returned
+ * odp_packet_t is an exact separate copy of the input packet, and as
+ * such inherits its initial headroom and tailroom settings from the
+ * buffer pool from which it is allocated. If the input pkt contains
+ * user metadata, then this data MUST be copied to the returned
+ * packet if needed by the ODP implementation.
+ *
+ * @par
+ * This routine is OPTIONAL. An implementation that does not
+ * support this function MUST provide a matching routine that simply
+ * returns ODP_PACKET_INVALID with an errno of
+ * ODP_FUNCTION_NOT_AVAILABLE.
+ */
+odp_packet_t odp_packet_copy(odp_packet_t pkt, odp_buffer_pool_t pool);
+
+/**
+ * Copy selected bytes from one packet to another
+ *
+ * @param[in] dstpkt Handle of destination packet
+ * @param[in] dstoffset Byte offset in destination packet to receive bytes
+ * @param[in] srcpkt Handle of source packet
+ * @param[in] srcoffset Byte offset in source packet from which to copy
+ * @param[in] len Number of bytes to be copied
+ *
+ * @return 0 on Success, -1 on errors.
+ *
+ * @note This routine copies a slice of an ODP packet to another
+ * packet in an implementation-defined manner. The call copies len
+ * bytes starting at srcoffset from srcpkt to offset dstoffset in
+ * dstpkt. Any existing bytes in the target range of the destination
+ * packet are overwritten by the operation. The operation will fail
+ * if sufficient bytes are not available in the source packet or
+ * sufficient space is not available in the destination packet. This
+ * routine does not change the length of the destination packet. If
+ * the caller wishes to extend the destination packet it must first
+ * push the tailroom of the destination packet to make space available
+ * to receive the copied bytes.
+ *
+ * @par
+ * This routine is OPTIONAL. An implementation that does not
+ * support this function MUST provide a matching routine that simply
+ * returns -1 with an errno of ODP_FUNCTION_NOT_AVAILABLE.
+ */
+int odp_packet_copy_to_packet(odp_packet_t dstpkt, size_t dstoffset,
+			      odp_packet_t srcpkt, size_t srcoffset,
+			      size_t len);
+
+/**
+ * Copy selected bytes from a packet to a memory area
+ *
+ * @param[out] mem Address to receive copied bytes
+ * @param[in] srcpkt Handle of source packet
+ * @param[in] srcoffset Byte offset in source packet from which to copy
+ * @param[in] len Number of bytes to be copied
+ *
+ * @return 0 on Success, -1 on errors.
+ *
+ * @note This routine copies a slice of an ODP packet to an
+ * application-supplied memory area in an implementation-defined
+ * manner. The call copies len bytes starting at srcoffset from
+ * srcpkt to the address specified by mem. Any existing bytes in the
+ * target memory are overwritten by the operation. The operation will
+ * fail if sufficient bytes are not available in the source packet.
+ * It is the caller’s responsibility to ensure that the specified
+ * memory area is large enough to receive the packet bytes being
+ * copied.
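+ *
+ * @par
+ * For illustration, copying the first 14 bytes of a packet (a
+ * presumed Ethernet header; the length is this example’s assumption)
+ * into a local buffer might look like:
+ *
+ * @code
+ * uint8_t hdr[14];
+ *
+ * if (odp_packet_copy_to_memory(hdr, pkt, 0, sizeof(hdr)) != 0) {
+ *         // fewer than 14 bytes were available in pkt
+ * }
+ * @endcode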
+ *
+ * @par
+ * This routine is OPTIONAL. An implementation that does not
+ * support this function MUST provide a matching routine that simply
+ * returns -1 with an errno of ODP_FUNCTION_NOT_AVAILABLE.
+ *
+ */
+int odp_packet_copy_to_memory(void *mem,
+			      odp_packet_t srcpkt, size_t srcoffset,
+			      size_t len);
+
+/**
+ * Copy bytes from a memory area to a specified offset in a packet
+ *
+ * @param[in] dstpkt Handle of destination packet
+ * @param[in] dstoffset Byte offset in destination packet to receive bytes
+ * @param[in] mem Address of bytes to be copied
+ * @param[in] len Number of bytes to be copied
+ *
+ * @return 0 on Success, -1 on errors.
+ *
+ * @note This routine copies len bytes from the application memory
+ * area mem to a specified offset of an ODP packet in an
+ * implementation-defined manner. Any existing bytes in the target
+ * range of the destination packet are overwritten by the operation.
+ * The operation will fail if sufficient space is not available in the
+ * destination packet. This routine does not change the length of the
+ * destination packet. If the caller wishes to extend the destination
+ * packet it must first push the tailroom of the destination packet to
+ * make space available to receive the copied bytes.
+ *
+ * @par
+ * This routine is OPTIONAL. An implementation that does not
+ * support this function MUST provide a matching routine that simply
+ * returns -1 with an errno of ODP_FUNCTION_NOT_AVAILABLE.
+ *
+ */
+int odp_packet_copy_from_memory(odp_packet_t dstpkt, size_t dstoffset,
+				void *mem, size_t len);
+
+/**
+ * Split a packet into two packets at a specified split point
+ *
+ * @param[in] pkt Handle of packet to split
+ * @param[in] offset Byte offset within pkt to split packet
+ * @param[in] hr Headroom of split packet
+ * @param[in] tr Tailroom of source packet
+ *
+ * @return Packet handle of the created split packet
+ *
+ * @note This routine splits a packet into two packets at the
+ * specified byte offset. The odp_packet_t returned by the function
+ * is the handle of the new packet created at the split point. The new
+ * (split) packet is allocated from the same buffer pool as the
+ * original packet. If the original packet was len bytes in length
+ * then upon return the original packet is of length offset while the
+ * split packet is of length (len-offset).
+ *
+ * @par
+ * The original packet’s headroom is unchanged by this function.
+ * The split packet inherits its tailroom from the original packet.
+ * The hr and tr parameters are used to assign new headroom and
+ * tailroom values to the split and original packets, respectively.
+ * This operation is illustrated by the following diagrams. Prior to
+ * the split, the original packet looks like this:
+ *
+ * @image html splitbefore.png "Packet before split" width=\textwidth
+ * @image latex splitbefore.eps "Packet before split" width=\textwidth
+ *
+ * @par
+ * After splitting at the specified split offset the result is this:
+ *
+ * @image html splitafter.png "Packet after split" width=\textwidth
+ * @image latex splitafter.eps "Packet after split" width=\textwidth
+ *
+ * @par
+ * The data from the original packet from the specified split
+ * offset to the end of the original packet becomes the split packet.
+ * The packet data at the split point becomes offset 0 of the new
+ * packet created by the split. The split packet inherits the
+ * original packet’s tailroom and is assigned its own headroom from
+ * hr, while the original packet retains its original headroom while
+ * being assigned a new tailroom from tr.
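+ *
+ * @par
+ * As an illustrative sketch, splitting the payload away from a
+ * 14-byte L2 header (all values hypothetical) might look like:
+ *
+ * @code
+ * odp_packet_t tail = odp_packet_split(pkt, 14, 64, 0);
+ *
+ * if (tail == ODP_PACKET_INVALID) {
+ *         // split not performed
+ * }
+ * @endcode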
+ *
+ * @par
+ * Upon return from this function, the system metadata for both
+ * packets has been updated appropriately by the call since system
+ * metadata maintenance is the responsibility of the ODP
+ * implementation. Any required updates to the user metadata are the
+ * responsibility of the caller.
+ *
+ * @par
+ * This routine is OPTIONAL. An implementation that does not
+ * support this function MUST provide a matching routine that simply
+ * returns ODP_PACKET_INVALID with an errno of
+ * ODP_FUNCTION_NOT_AVAILABLE.
+ */
+odp_packet_t odp_packet_split(odp_packet_t pkt, size_t offset,
+			      size_t hr, size_t tr);
+
+/**
+ * Join two packets into a single packet
+ *
+ * @param[in] pkt1 Packet handle of first packet to join
+ * @param[in] pkt2 Packet handle of second packet to join
+ *
+ * @return Packet handle of the joined packet
+ *
+ * @note This routine joins two packets into a single packet. Both
+ * pkt1 and pkt2 MUST be from the same buffer pool and the resulting
+ * joined packet will be an element of that same pool. The
+ * application MUST NOT assume that either pkt1 or pkt2 survives the
+ * join or that the returned joined packet is contiguous with or
+ * otherwise related to the input packets. An implementation SHOULD
+ * free either or both input packets if they are not reused as part of
+ * the construction of the returned joined packet. If the join cannot
+ * be performed (e.g., if the two input packets are not from the same
+ * buffer pool, if there is insufficient space in the target buffer
+ * pool, etc.) then ODP_PACKET_INVALID SHOULD be returned to indicate
+ * that the operation could not be performed, and an appropriate errno
+ * set. In such a case the input packets MUST NOT be freed as part of
+ * the failed join attempt and MUST be unchanged from their input
+ * values and content.
+ *
+ * @par
+ * The result of odp_packet_join() is the logical concatenation
+ * of the two packets using an implementation-defined aggregation
+ * mechanism. The application data contents of the returned packet are
+ * identical to those of the two joined input packets; however, certain
+ * associated metadata (e.g., information about the packet length)
+ * will likely differ. The headroom associated with the joined packet
+ * is the headroom of pkt1 while the tailroom of the joined packet is
+ * the tailroom of pkt2. Any tailroom from pkt1 or headroom from pkt2
+ * from before the join is handled in an implementation-defined manner
+ * and is no longer visible to the application.
+ *
+ * @par
+ * If user metadata is present in the input packets, then the
+ * user metadata associated with the returned packet MUST be copied
+ * by this routine from the source pkt1.
+ *
+ * @par
+ * This routine is OPTIONAL. An implementation that does not
+ * support this function MUST provide a matching routine that simply
+ * returns ODP_PACKET_INVALID with an errno of
+ * ODP_FUNCTION_NOT_AVAILABLE.
+ */
+odp_packet_t odp_packet_join(odp_packet_t pkt1, odp_packet_t pkt2);
+
+/**
+ * Push out packet head
+ *
+ * Push out packet address (away from data) and increase data length.
 * Does not modify packet in case of an error.
* - * seg_data_len += len + * @code + * odp_packet_headroom -= len + * odp_packet_len += len + * odp_packet_l2_offset += len + * odp_packet_l3_offset += len + * odp_packet_l4_offset += len + * odp_packet_payload_offset += len + * @endcode + * + * @param[in] pkt Packet handle + * @param[in] len Number of octets to push head [0...odp_packet_headroom] + * + * @return 0 on Success, -1 on error + * + * @note This routine pushes the packet start away from the current + * start point and into the packet headroom. This would normally be + * used by the application to prepend additional header information to + * the start of the packet. Note that pushing the header does not + * affect the parse results. Upon completion odp_packet_map() now + * points to the new start of the packet data area and + * odp_packet_len() is increased by the specified len. + * + * @par + * Note that it is the caller’s responsibility to initialize the + * new header area with meaningful data. This routine simply + * manipulates packet metadata and does not affect packet contents. + * The specified len is added to the following: + * + * - odp_packet_l2_offset + * - odp_packet_l3_offset + * - odp_packet_l4_offset + * - odp_packet_payload_offset + * - odp_packet_len + * + * @par + * In addition odp_packet_headroom is decremented by the specified len. + * + * @par + * Note that this routine simply adjusts the headroom and other + * metadata. If the caller also wishes to immediately address the + * newly added header area it can use the + * odp_packet_push_head_and_map() routine instead. + */ +int odp_packet_push_head(odp_packet_t pkt, size_t len); + +/** + * Push out packet head and map resulting packet * - * @param pkt Packet handle - * @param seg Segment handle - * @param len Number of octets to push tail (0 ... seg_tailroom) + * Push out packet address (away from data) and increase data length. + * Does not modify packet in case of an error. * - * @return New segment data length, or -1 on an error + * @code + * odp_packet_headroom -= len + * odp_packet_len += len + * odp_packet_l2_offset += len + * odp_packet_l3_offset += len + * odp_packet_l4_offset += len + * odp_packet_payload_offset += len + * @endcode + * + * @param[in] pkt Packet handle + * @param[in] len Number of octets to push head [0...odp_packet_headroom] + * @param[out] seglen Number of addressable bytes at returned start address + * + * @return New packet data start address, or NULL on an error + * + * @note This routine pushes the packet start away from the current + * start point and into the packet headroom. This would normally be + * used by the application to prepend additional header information to + * the start of the packet. Note that pushing the header does not + * affect the parse results. Upon completion odp_packet_map() now + * points to the new start of the packet data area and + * odp_packet_len() is increased by the specified len. + * + * @par + * The returned seglen specifies the number of contiguously + * addressable bytes available at the returned start address. The + * caller MUST NOT attempt to address beyond this range. To access + * additional parts of the packet following odp_packet_push_head() the + * odp_packet_offset_map() routine SHOULD be used. + * + * @par + * Note that it is the caller’s responsibility to initialize the + * new header area with meaningful data. This routine simply + * manipulates packet metadata and does not affect packet contents. 
+ * The specified len is added to the following: + * + * - odp_packet_l2_offset + * - odp_packet_l3_offset + * - odp_packet_l4_offset + * - odp_packet_payload_offset + * - odp_packet_len + * + * @par + * In addition odp_packet_headroom is decremented by the specified len. + * + * @par + * This routine is equivalent to the following code: + * + * @code + * odp_packet_push_head(pkt,len); + * void *result = odp_packet_map(pkt,&seglen); + * @endcode + * + * @par + * It exists for application convenience and MAY offer + * implementation efficiency. */ -int odp_packet_seg_push_tail(odp_packet_t pkt, odp_packet_seg_t seg, - size_t len); +void *odp_packet_push_head_and_map(odp_packet_t pkt, size_t len, + size_t *seglen); /** - * Pull in segment tail + * Pull in packet head * - * Decrease segment data length. + * Pull in packet address (consuming data) and decrease data length. * Does not modify packet in case of an error. * - * seg_data_len -= len + * @code + * odp_packet_headroom += len + * odp_packet_len -= len + * odp_packet_l2_offset -= len + * odp_packet_l3_offset -= len + * odp_packet_l4_offset -= len + * odp_packet_payload_offset -= len + * @endcode + * + * @param[in] pkt Packet handle + * @param[in] len Number of octets to pull head [0...odp_packet_len] + * + * @return 0 on Success, -1 on error + * + * @note This routine pulls (consumes) bytes from the start of a + * packet, adding to the packet headroom. Typical use of this is to + * remove (pop) headers from a packet, possibly prior to pushing new + * headers. odp_packet_len() is decreased to reflect the shortened + * packet data resulting from the pull. This routine does not affect + * the contents of the packet, only metadata that describes it. The + * affected parsed offsets are decremented by the specified len, + * however no offset is decremented below 0. + * + * @par + * Note: Since odp_packet_push_head() and odp_packet_pull_head() + * simply manipulate metadata, it is likely that the meaning of the + * pre-parsed header offsets may be lost if headers are stripped and + * new headers are inserted. If the application is doing significant + * header manipulation, it MAY wish to call odp_packet_parse() when it + * is finished to cause the packet to be reparsed and the meaning of + * the various parsed metadata to be restored to reflect the new + * packet contents. + */ +int odp_packet_pull_head(odp_packet_t pkt, size_t len); + +/** + * Pull in packet head and make results addressable to caller + * + * Pull in packet address (consuming data) and decrease data length. + * Does not modify packet in case of an error. + * + * @code + * odp_packet_headroom += len + * odp_packet_len -= len + * odp_packet_l2_offset -= len + * odp_packet_l3_offset -= len + * odp_packet_l4_offset -= len + * odp_packet_payload_offset -= len + * @endcode + * + * @param[in] pkt Packet handle + * @param[in] len Number of octets to pull head [0...odp_packet_len] + * @param[out] seglen Number of addressable bytes at returned start address + * + * @return New packet data start address, or NULL on an error + * + * @note This routine pulls (consumes) bytes from the start of a + * packet, adding to the packet headroom. Typical use of this is to + * remove (pop) headers from a packet, possibly prior to pushing new + * headers. The return value of this routine is the new + * odp_packet_map() for the packet and odp_packet_len() is decreased + * to reflect the shortened packet data resulting from the pull. 
This
+ * routine does not affect the contents of the packet, only metadata
+ * that describes it. The affected parsed offsets are decremented by
+ * the specified len, however no offset is decremented below 0.
+ *
+ * @par
+ * Note: Since odp_packet_push_head() and odp_packet_pull_head()
+ * simply manipulate metadata, it is likely that the meaning of the
+ * pre-parsed header offsets may be lost if headers are stripped and
+ * new headers are inserted. If the application is doing significant
+ * header manipulation, it MAY wish to call odp_packet_parse() when it
+ * is finished to cause the packet to be reparsed and the meaning of
+ * the various parsed metadata to be restored to reflect the new
+ * packet contents.
+ *
+ * @par
+ * Note that this routine is equivalent to the calls:
+ *
+ * @code
+ * odp_packet_pull_head(pkt,len);
+ * void *result = odp_packet_map(pkt,&seglen);
+ * @endcode
+ *
+ * @par
+ * It exists for application convenience and MAY offer
+ * implementation efficiency.
+ */
+void *odp_packet_pull_head_and_map(odp_packet_t pkt, size_t len,
+				   size_t *seglen);
+
+/**
+ * Push out packet tail
+ *
+ * Push out the end of the packet, consuming tailroom and increasing
+ * its length. Does not modify packet in case of an error.
+ *
+ * @code
+ * odp_packet_len += len
+ * odp_packet_tailroom -= len
+ * @endcode
+ *
+ * @param[in] pkt Packet handle
+ * @param[in] len Number of octets to push tail [0...odp_packet_tailroom]
+ *
+ * @return 0 on Success, -1 on Failure
+ *
+ * @note This routine adds additional bytes to the end of a packet,
+ * increasing its length. Note that it does not change the contents
+ * of the packet but simply manipulates the packet metadata. It is
+ * the caller’s responsibility to initialize the new area with
+ * meaningful packet data.
+ *
+ * @par The intended use of this routine is to allow the application
+ * to insert additional payload or trailers onto the packet.
+ */
+int odp_packet_push_tail(odp_packet_t pkt, size_t len);
+
+/**
+ * Push out packet tail and map results
+ *
+ * Push out the end of the packet, consuming tailroom and increasing
+ * its length. Does not modify packet in case of an error.
+ *
+ * @code
+ * odp_packet_len += len
+ * odp_packet_tailroom -= len
+ * @endcode
+ *
+ * @param[in] pkt Packet handle
+ * @param[in] len Number of octets to push tail [0...odp_packet_tailroom]
+ * @param[out] seglen Number of addressable bytes at returned data address
+ *
+ * @return Address of start of additional packet data, or NULL on an error
+ *
+ * @note This routine adds additional bytes to the end of a packet,
+ * increasing its length. Note that it does not change the contents
+ * of the packet but simply manipulates the packet metadata. It is
+ * the caller’s responsibility to initialize the new area with
+ * meaningful packet data.
+ *
+ * @par
+ * This routine is equivalent to the code:
+ *
+ * @code
+ * void *dataptr;
+ * size_t seglen;
+ * odp_packet_push_tail(pkt, len);
+ * dataptr = odp_packet_offset_map(pkt, odp_packet_len(pkt) - len, &seglen);
+ * @endcode
+ *
+ * @par
+ * The returned pointer is the mapped start of the new data area
+ * (beginning at the former odp_packet_len() offset) and the returned
+ * seglen is the number of contiguously addressable bytes available at
+ * that address. The caller should initialize the additional data
+ * bytes to meaningful values. If seglen is less than the requested
+ * len then odp_packet_offset_map() should be used to address the
+ * remaining bytes.
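+ *
+ * @par
+ * A minimal usage sketch (illustrative; pad_len is a hypothetical
+ * caller-chosen value within the available tailroom):
+ *
+ * @code
+ * size_t seglen;
+ * uint8_t *tail = odp_packet_push_tail_and_map(pkt, pad_len, &seglen);
+ *
+ * if (tail != NULL)
+ *         memset(tail, 0, seglen < pad_len ? seglen : pad_len);
+ * @endcode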
+ *
+ * @par
+ * The intended use of this routine is to allow the application
+ * to insert additional payload or trailers onto the packet.
+ */
+void *odp_packet_push_tail_and_map(odp_packet_t pkt, size_t len,
+				   size_t *seglen);
+
+/**
+ * Pull in packet tail
+ *
+ * Reduce packet length, trimming data from the end of the packet,
+ * and adding to its tailroom. Does not modify packet in case of an error.
+ *
+ * @code
+ * odp_packet_len -= len
+ * odp_packet_tailroom += len
+ * @endcode
+ *
+ * @param[in] pkt Packet handle
+ * @param[in] len Number of octets to pull tail [0...odp_packet_len]
 *
- * @param pkt Packet handle
- * @param seg Segment handle
- * @param len Number of octets to pull tail (0 ... seg_data_len)
+ * @return 0 on Success, -1 on failure.
 *
- * @return New segment data length, or -1 on an error
+ * @note This routine pulls in the packet tail, adding those bytes to
+ * the packet tailroom. Upon successful return the packet has been
+ * trimmed by len bytes. The intended use of this routine is to allow
+ * the application to remove trailers from the packet.
 */
-int odp_packet_seg_pull_tail(odp_packet_t pkt, odp_packet_seg_t seg,
-			     size_t len);
+int odp_packet_pull_tail(odp_packet_t pkt, size_t len);
 
 /**
  * @}
diff --git a/platform/linux-generic/include/api/odp_packet_io.h b/platform/linux-generic/include/api/odp_packet_io.h
index 360636d..f6824a0 100644
--- a/platform/linux-generic/include/api/odp_packet_io.h
+++ b/platform/linux-generic/include/api/odp_packet_io.h
@@ -19,6 +19,7 @@ extern "C" {
 #endif
 
 #include
+#include
 #include
 #include
 #include
@@ -28,9 +29,6 @@ extern "C" {
  * @{
  */
 
-/** ODP packet IO handle */
-typedef uint32_t odp_pktio_t;
-
 /** Invalid packet IO handle */
 #define ODP_PKTIO_INVALID 0
 
diff --git a/platform/linux-generic/include/api/odp_typedefs.h b/platform/linux-generic/include/api/odp_typedefs.h
new file mode 100644
index 0000000..ed11e3b
--- /dev/null
+++ b/platform/linux-generic/include/api/odp_typedefs.h
@@ -0,0 +1,60 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+
+/**
+ * @file
+ *
+ * @par ODP implementation typedefs
+ * This file contains all of the implementation-defined typedefs for
+ * ODP abstract types. Having this in one file means that other ODP
+ * API files are implementation-independent and avoids include order
+ * dependencies for files that refer to types managed by other
+ * components.
+ */
+
+#ifndef ODP_TYPEDEFS_H_
+#define ODP_TYPEDEFS_H_
+
+/** ODP Buffer pool */
+typedef uint32_t odp_buffer_pool_t;
+
+/** Invalid buffer pool */
+#define ODP_BUFFER_POOL_INVALID 0
+
+/** ODP buffer */
+typedef uint32_t odp_buffer_t;
+
+/** ODP buffer segment */
+typedef uint32_t odp_buffer_segment_t;
+
+/** ODP buffer type */
+typedef enum odp_buffer_type {
+	ODP_BUFFER_TYPE_INVALID = -1, /**< Buffer type invalid */
+	ODP_BUFFER_TYPE_ANY = 0,      /**< Buffer type can hold any other
+					   buffer type */
+	ODP_BUFFER_TYPE_RAW = 1,      /**< Raw buffer,
+					   no additional metadata */
+	ODP_BUFFER_TYPE_PACKET = 2,   /**< Packet buffer */
+	ODP_BUFFER_TYPE_TIMEOUT = 3,  /**< Timeout buffer */
+} odp_buffer_type_e;
+
+/** ODP packet */
+typedef odp_buffer_t odp_packet_t;
+
+/** Invalid packet */
+#define ODP_PACKET_INVALID (odp_packet_t)(-1)
+
+/** ODP packet segment */
+typedef odp_buffer_segment_t odp_packet_segment_t;
+
+/** Invalid packet segment */
+#define ODP_PACKET_SEGMENT_INVALID (odp_packet_segment_t)(-1)
+
+/** ODP packet IO handle */
+typedef uint32_t odp_pktio_t;
+
+#endif
diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h
new file mode 100644
index 0000000..0442bd0
--- /dev/null
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -0,0 +1,234 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * Inline functions for ODP buffer mgmt routines - implementation internal
+ */
+
+#ifndef ODP_BUFFER_INLINES_H_
+#define ODP_BUFFER_INLINES_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr)
+{
+	return hdr->buf_hdl.handle;
+}
+
+static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t *hdr)
+{
+	odp_buffer_bits_t handle;
+	uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl);
+	struct pool_entry_s *pool = get_pool_entry(pool_id);
+
+	handle.pool_id = pool_id;
+	handle.index = ((uint8_t *)hdr - pool->pool_base_addr) /
+		ODP_CACHE_LINE_SIZE;
+	handle.seg = 0;
+
+	return handle.u32;
+}
+
+static inline odp_buffer_segment_t odp_hdr_to_seg(odp_buffer_hdr_t *hdr,
+						  size_t ndx)
+{
+	odp_buffer_bits_t handle;
+	uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl);
+	struct pool_entry_s *pool = get_pool_entry(pool_id);
+
+	handle.pool_id = pool_id;
+	handle.index = ((uint8_t *)hdr - pool->pool_base_addr) /
+		ODP_CACHE_LINE_SIZE;
+	handle.seg = ndx;
+
+	return handle.u32;
+}
+
+static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf)
+{
+	odp_buffer_bits_t handle;
+	uint32_t pool_id;
+	uint32_t index;
+	struct pool_entry_s *pool;
+
+	handle.u32 = buf;
+	pool_id = handle.pool_id;
+	index = handle.index;
+
+#ifdef POOL_ERROR_CHECK
+	if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) {
+		ODP_ERR("odp_buf_to_hdr: Bad pool id\n");
+		return NULL;
+	}
+#endif
+
+	pool = get_pool_entry(pool_id);
+
+#ifdef POOL_ERROR_CHECK
+	if (odp_unlikely(index > pool->num_bufs - 1)) {
+		ODP_ERR("odp_buf_to_hdr: Bad buffer index\n");
+		return NULL;
+	}
+#endif
+
+	return (odp_buffer_hdr_t *)(void *)
+		(pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE));
+}
+
+static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf)
+{
+	return buf->ref_count;
+}
+
+static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf,
+						uint32_t val)
+{
+	return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val;
+}
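+
+/*
+ * Note that the decrement below saturates at zero rather than
+ * wrapping: if more references are released than are held, the
+ * count is restored and 0 is returned, keeping the header consistent.
+ */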
+static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf,
+						uint32_t val)
+{
+	uint32_t tmp;
+
+	tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val);
+
+	if (tmp < val) {
+		odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp);
+		return 0;
+	} else {
+		return tmp - val;
+	}
+}
+
+static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf)
+{
+	odp_buffer_bits_t handle;
+	odp_buffer_hdr_t *buf_hdr;
+	handle.u32 = buf;
+
+	/* For buffer handles, segment index must be 0 */
+	if (handle.seg != 0)
+		return NULL;
+
+	/* handle.pool_id holds the pool index, not the pool handle */
+	pool_entry_t *pool = (pool_entry_t *)get_pool_entry(handle.pool_id);
+
+	/* If pool not created, handle is invalid */
+	if (pool->s.shm == ODP_SHM_INVALID)
+		return NULL;
+
+	/* A valid buffer index must be on stride, and must be in range */
+	if ((handle.index % pool->s.buf_stride != 0) ||
+	    ((uint32_t)(handle.index / pool->s.buf_stride) >=
+	     pool->s.num_bufs))
+		return NULL;
+
+	buf_hdr = (odp_buffer_hdr_t *)(void *)
+		(pool->s.pool_base_addr +
+		 (handle.index * ODP_CACHE_LINE_SIZE));
+
+	/* Handle is valid, so buffer is valid if it is allocated */
+	if (buf_hdr->segcount == 0)
+		return NULL;
+	else
+		return buf_hdr;
+}
+
+int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf);
+
+static inline void *buffer_map(odp_buffer_hdr_t *buf,
+			       size_t offset,
+			       size_t *seglen,
+			       size_t limit)
+{
+	int seg_index = offset / buf->segsize;
+	int seg_offset = offset % buf->segsize;
+	size_t buf_left = limit - offset;
+
+	/* Addressable bytes are capped both by the caller's limit and
+	 * by the end of the containing segment */
+	*seglen = buf_left < buf->segsize - seg_offset ?
+		buf_left : buf->segsize - seg_offset;
+
+	return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]);
+}
+
+static inline odp_buffer_segment_t buffer_segment(odp_buffer_hdr_t *buf,
+						  size_t ndx)
+{
+	odp_buffer_bits_t seghandle;
+	seghandle.u32 = buf->buf_hdl.u32;
+
+	/* Valid segment indexes are 0 .. segcount - 1 */
+	if (ndx >= buf->segcount) {
+		return ODP_SEGMENT_INVALID;
+	} else {
+		seghandle.seg = ndx;
+		return seghandle.handle;
+	}
+}
+
+static inline odp_buffer_segment_t segment_next(odp_buffer_hdr_t *buf,
+						odp_buffer_segment_t seg)
+{
+	odp_buffer_bits_t seghandle;
+	seghandle.u32 = seg;
+
+
+	if (seg == ODP_SEGMENT_INVALID)
+		return (odp_buffer_segment_t)buf->buf_hdl.u32;
+
+	if (seghandle.prefix != buf->buf_hdl.prefix ||
+	    seghandle.seg >= buf->segcount) {
+		return ODP_SEGMENT_INVALID;
+	} else {
+		seghandle.seg++;
+		return (odp_buffer_segment_t)seghandle.u32;
+	}
+}
+
+static inline void *segment_map(odp_buffer_hdr_t *buf,
+				odp_buffer_segment_t seg,
+				size_t *seglen,
+				size_t limit,
+				size_t hr)
+{
+	size_t seg_offset, buf_left;
+	odp_buffer_bits_t seghandle;
+	uint8_t *seg_addr;
+	seghandle.u32 = seg;
+
+	if (seghandle.prefix != buf->buf_hdl.prefix ||
+	    seghandle.seg >= buf->segcount)
+		return NULL;
+
+	seg_addr = (uint8_t *)buf->addr[seghandle.seg];
+	seg_offset = seghandle.seg * buf->segsize;
+
+	/* Special handling for packets to account for headroom */
+	if (hr > 0) {
+		/* Can't map this segment if it's nothing but hr */
+		if (hr >= seg_offset + buf->segsize)
+			return NULL;
+
+		/* ...else adjust for hr */
+		seg_addr += hr % buf->segsize;
+		limit += hr % buf->segsize;
+	}
+
+	buf_left = limit - seg_offset;
+	*seglen = buf_left < buf->segsize ?
buf_left : buf->segsize; + + return (void *)seg_addr; +} + + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 0027bfc..f4ef956 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -20,26 +20,39 @@ extern "C" { #include #include -#include #include #include #include - -/* TODO: move these to correct files */ - -typedef uint64_t odp_phys_addr_t; - -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) - -#define ODP_BUFS_PER_CHUNK 16 -#define ODP_BUFS_PER_SCATTER 4 - -#define ODP_BUFFER_TYPE_CHUNK 0xffff - +#include + +#define ODP_BUFFER_MAX_SEG (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) + +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0, + "ODP Segment size must be a multiple of cache line size"); + +#define ODP_SEGBITS(x) \ + ((x) < 2 ? 1 : \ + ((x) < 4 ? 2 : \ + ((x) < 8 ? 3 : \ + ((x) < 16 ? 4 : \ + ((x) < 32 ? 5 : \ + ((x) < 64 ? 6 : \ + ((x) < 128 ? 7 : \ + ((x) < 256 ? 8 : \ + ((x) < 512 ? 9 : \ + ((x) < 1024 ? 10 : \ + ((x) < 2048 ? 11 : \ + ((x) < 4096 ? 12 : \ + (0/0))))))))))))) + +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), + "Number of segments must not exceed log of cache line size"); #define ODP_BUFFER_POOL_BITS 4 -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - ODP_BUFFER_SEG_BITS) +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + ODP_BUFFER_INDEX_BITS) #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) @@ -50,73 +63,48 @@ typedef union odp_buffer_bits_t { struct { uint32_t pool_id:ODP_BUFFER_POOL_BITS; uint32_t index:ODP_BUFFER_INDEX_BITS; + uint32_t seg:ODP_BUFFER_SEG_BITS; }; -} odp_buffer_bits_t; + struct { + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; + }; +} odp_buffer_bits_t; /* forward declaration */ struct odp_buffer_hdr_t; - -/* - * Scatter/gather list of buffers - */ -typedef struct odp_buffer_scatter_t { - /* buffer pointers */ - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; - int num_bufs; /* num buffers */ - int pos; /* position on the list */ - size_t total_len; /* Total length */ -} odp_buffer_scatter_t; - - -/* - * Chunk of buffers (in single pool) - */ -typedef struct odp_buffer_chunk_t { - uint32_t num_bufs; /* num buffers */ - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ -} odp_buffer_chunk_t; - - /* Common buffer header */ typedef struct odp_buffer_hdr_t { struct odp_buffer_hdr_t *next; /* next buf in a list */ - odp_buffer_bits_t handle; /* handle */ - odp_phys_addr_t phys_addr; /* physical data start address */ - void *addr; /* virtual data start address */ - uint32_t index; /* buf index in the pool */ + odp_buffer_bits_t buf_hdl; /* handle */ size_t size; /* max data size */ - size_t cur_offset; /* current offset */ odp_atomic_u32_t ref_count; /* reference count */ - odp_buffer_scatter_t scatter; /* Scatter/gather list */ - int type; /* type of next header */ + odp_buffer_type_e type; /* type of next header */ odp_buffer_pool_t pool_hdl; /* buffer pool handle */ - + void *udata_addr; /* user meta data addr */ + size_t udata_size; /* size of user meta data */ + uint32_t 
segcount; /* segment count */ + uint32_t segsize; /* segment size */ + void *addr[ODP_BUFFER_MAX_SEG]; /* Block addrs */ } odp_buffer_hdr_t; -/* Ensure next header starts from 8 byte align */ -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, "ODP_BUFFER_HDR_T__SIZE_ERROR"); - - -/* Raw buffer header */ -typedef struct { - odp_buffer_hdr_t buf_hdr; /* common buffer header */ - uint8_t buf_data[]; /* start of buffer data area */ -} odp_raw_buffer_hdr_t; - - -/* Chunk header */ -typedef struct odp_buffer_chunk_hdr_t { - odp_buffer_hdr_t buf_hdr; - odp_buffer_chunk_t chunk; -} odp_buffer_chunk_hdr_t; - +typedef struct odp_buffer_hdr_stride { + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; +} odp_buffer_hdr_stride; -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); +typedef struct odp_buf_blk_t { + struct odp_buf_blk_t *next; + struct odp_buf_blk_t *prev; +} odp_buf_blk_t; -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src); +/* Forward declarations */ +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); +int buffer_copy_to_buffer(odp_buffer_t dstbuf, size_t dstoffset, + odp_buffer_t srcbuf, size_t srcoffset, + size_t len); #ifdef __cplusplus } diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h index e0210bd..5d792c8 100644 --- a/platform/linux-generic/include/odp_buffer_pool_internal.h +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h @@ -24,6 +24,7 @@ extern "C" { #include #include #include +#include #include /* Use ticketlock instead of spinlock */ @@ -47,66 +48,146 @@ struct pool_entry_s { odp_spinlock_t lock ODP_ALIGNED_CACHE; #endif - odp_buffer_chunk_hdr_t *head; - uint64_t free_bufs; char name[ODP_BUFFER_POOL_NAME_LEN]; - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; - uintptr_t buf_base; - size_t buf_size; - size_t buf_offset; + odp_buffer_pool_t pool_hdl; + odp_buffer_pool_param_t params; + odp_buffer_pool_init_t init_params; + odp_shm_t shm; + union { + uint32_t all; + struct { + uint32_t unsegmented:1; + uint32_t predefined:1; + }; + } flags; + uint8_t *pool_base_addr; + size_t pool_size; + int buf_stride; + uint8_t *udata_base_addr; + int buf_udata_size; + int udata_stride; + odp_buffer_hdr_t *buf_freelist; + uint8_t *blk_freelist; + odp_atomic_u32_t bufcount; uint64_t num_bufs; - void *pool_base_addr; - uint64_t pool_size; - size_t user_size; - size_t user_align; - int buf_type; - size_t hdr_size; + size_t seg_size; + size_t high_wm; + size_t low_wm; + size_t headroom; + size_t tailroom; }; +typedef union pool_entry_u { + struct pool_entry_s s; + + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; + +} pool_entry_t; extern void *pool_entry_ptr[]; +#if UINTPTR_MAX == 0xffffffffffffffff +#define odp_at odp_atomic_u64_t +#define odp_cs(p, o, n) odp_atomic_cmpset_u64((odp_at *)(void *)(p), \ + (uint64_t)(o), (uint64_t)(n)) +#else +#define odp_at odp_atomic_u32_t +#define odp_cs(p, o, n) odp_atomic_cmpset_u32((odp_at *)(void *)(p), \ + (uint32_t)(o), (uint32_t)(n)) +#endif -static inline void *get_pool_entry(uint32_t pool_id) +/* This macro suggested by Shmulik Ladkani */ +#define odp_ref(p) \ + ((typeof(p))(uintptr_t) *(volatile typeof(p) const *)&(p)) + +static inline void *get_blk(struct pool_entry_s *pool) { - return pool_entry_ptr[pool_id]; + void *oldhead, *newhead; + + do { + oldhead = odp_ref(pool->blk_freelist); + if (oldhead == NULL) + break; + newhead = ((odp_buf_blk_t *)oldhead)->next; + 
} while (odp_cs(&pool->blk_freelist, oldhead, newhead) == 0); + + return (void *)oldhead; } +static inline void ret_blk(struct pool_entry_s *pool, void *block) +{ + void *oldhead; -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) + do { + oldhead = odp_ref(pool->blk_freelist); + ((odp_buf_blk_t *)block)->next = oldhead; + } while (odp_cs(&pool->blk_freelist, oldhead, block) == 0); +} + +static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool) { - odp_buffer_bits_t handle; - uint32_t pool_id; - uint32_t index; - struct pool_entry_s *pool; - odp_buffer_hdr_t *hdr; - - handle.u32 = buf; - pool_id = handle.pool_id; - index = handle.index; - -#ifdef POOL_ERROR_CHECK - if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { - ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); - return NULL; + odp_buffer_hdr_t *oldhead, *newhead; + + do { + oldhead = odp_ref(pool->buf_freelist); + if (oldhead == NULL) + break; + newhead = oldhead->next; + } while (odp_cs(&pool->buf_freelist, oldhead, newhead) == 0); + + if (oldhead != NULL) { + oldhead->next = oldhead; + odp_atomic_inc_u32(&pool->bufcount); } -#endif - pool = get_pool_entry(pool_id); + return (void *)oldhead; +} -#ifdef POOL_ERROR_CHECK - if (odp_unlikely(index > pool->num_bufs - 1)) { - ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); - return NULL; - } -#endif +static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf) +{ + odp_buffer_hdr_t *oldhead; - hdr = (odp_buffer_hdr_t *)(pool->buf_base + index * pool->buf_size); + while (buf->segcount > 0) + ret_blk(pool, buf->addr[--buf->segcount]); - return hdr; + do { + oldhead = odp_ref(pool->buf_freelist); + buf->next = oldhead; + } while (odp_cs(&pool->buf_freelist, oldhead, buf) == 0); + + odp_atomic_dec_u32(&pool->bufcount); } +static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) +{ + return pool_id + 1; +} + +static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) +{ + return pool_hdl - 1; +} + +static inline void *get_pool_entry(uint32_t pool_id) +{ + return pool_entry_ptr[pool_id]; +} + +static inline pool_entry_t *odp_pool_to_entry(odp_buffer_pool_t pool) +{ + return (pool_entry_t *)get_pool_entry(pool_handle_to_index(pool)); +} + +static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf) +{ + return odp_pool_to_entry(buf->pool_hdl); +} + +static inline size_t odp_buffer_pool_segment_size(odp_buffer_pool_t pool) +{ + return odp_pool_to_entry(pool)->s.seg_size; +} #ifdef __cplusplus } diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h index 49c59b2..aa03432 100644 --- a/platform/linux-generic/include/odp_packet_internal.h +++ b/platform/linux-generic/include/odp_packet_internal.h @@ -22,8 +22,9 @@ extern "C" { #include #include #include +#include #include -#include +#include /** * Packet input & protocol flags @@ -43,6 +44,7 @@ typedef union { uint32_t vlan:1; /**< VLAN hdr found */ uint32_t vlan_qinq:1; /**< Stacked VLAN found, QinQ */ + uint32_t snap:1; /**< SNAP */ uint32_t arp:1; /**< ARP */ uint32_t ipv4:1; /**< IPv4 */ @@ -53,7 +55,7 @@ typedef union { uint32_t udp:1; /**< UDP */ uint32_t tcp:1; /**< TCP */ - uint32_t sctp:1; /**< SCTP */ + uint32_t tcpopt:1; /**< TCP Options present */ uint32_t icmp:1; /**< ICMP */ }; } input_flags_t; @@ -69,7 +71,9 @@ typedef union { struct { /* Bitfield flags for each detected error */ + uint32_t app_error:1; /**< Error bit for application use */ uint32_t frame_len:1; /**< Frame length error */ + uint32_t 
snap_len:1;  /**< Snap length error */
 		uint32_t l2_chksum:1; /**< L2 checksum error, checks TBD */
 		uint32_t ip_err:1;    /**< IP error, checks TBD */
 		uint32_t tcp_err:1;   /**< TCP error, checks TBD */
@@ -88,7 +92,10 @@ typedef union {
 	struct {
 		/* Bitfield flags for each output option */
-		uint32_t l4_chksum:1; /**< Request L4 checksum calculation */
+		uint32_t l3_chksum_set:1; /**< L3 chksum bit is valid */
+		uint32_t l3_chksum:1;     /**< L3 chksum override */
+		uint32_t l4_chksum_set:1; /**< L4 chksum bit is valid */
+		uint32_t l4_chksum:1;     /**< L4 chksum override */
 	};
 } output_flags_t;
 
@@ -101,29 +108,33 @@ typedef struct {
 	/* common buffer header */
 	odp_buffer_hdr_t buf_hdr;
 
-	input_flags_t input_flags;
 	error_flags_t error_flags;
+	input_flags_t input_flags;
 	output_flags_t output_flags;
 
-	uint32_t frame_offset; /**< offset to start of frame, even on error */
 	uint32_t l2_offset; /**< offset to L2 hdr, e.g. Eth */
 	uint32_t l3_offset; /**< offset to L3 hdr, e.g. IPv4, IPv6 */
 	uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, also ICMP) */
+	uint32_t payload_offset; /**< offset to payload */
 
-	uint32_t frame_len;
+	uint32_t vlan_s_tag;  /**< Parsed 1st VLAN header (S-TAG) */
+	uint32_t vlan_c_tag;  /**< Parsed 2nd VLAN header (C-TAG) */
+	uint32_t l3_protocol; /**< Parsed L3 protocol */
+	uint32_t l3_len;      /**< Layer 3 length */
+	uint32_t l4_protocol; /**< Parsed L4 protocol */
+	uint32_t l4_len;      /**< Layer 4 length */
 
-	uint64_t user_ctx; /* user context */
+	uint32_t frame_len;
+	uint32_t headroom;
+	uint32_t tailroom;
 
 	odp_pktio_t input;
-
-	uint32_t pad;
-	uint8_t buf_data[]; /* start of buffer data area */
 } odp_packet_hdr_t;
 
-ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) == ODP_OFFSETOF(odp_packet_hdr_t, buf_data),
-		  "ODP_PACKET_HDR_T__SIZE_ERR");
-ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) % sizeof(uint64_t) == 0,
-		  "ODP_PACKET_HDR_T__SIZE_ERR2");
+typedef struct odp_packet_hdr_stride {
+	uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t))];
+} odp_packet_hdr_stride;
+
 
 /**
  * Return the packet header
@@ -133,10 +144,99 @@ static inline odp_packet_hdr_t *odp_packet_hdr(odp_packet_t pkt)
 	return (odp_packet_hdr_t *)odp_buf_to_hdr((odp_buffer_t)pkt);
 }
 
+static inline void odp_packet_set_len(odp_packet_t pkt, size_t len)
+{
+	odp_packet_hdr(pkt)->frame_len = len;
+}
+
+static inline odp_packet_hdr_t *odp_packet_hdr_from_buf_hdr(odp_buffer_hdr_t
+							    *buf_hdr)
+{
+	return (odp_packet_hdr_t *)buf_hdr;
+}
+
+static inline odp_buffer_hdr_t *odp_packet_hdr_to_buf_hdr(odp_packet_hdr_t *pkt)
+{
+	return &pkt->buf_hdr;
+}
+
+static inline odp_packet_t odp_packet_from_buf_internal(odp_packet_t buf)
+{
+	return (odp_packet_t)buf;
+}
+
+static inline odp_buffer_t odp_packet_to_buf_internal(odp_packet_t pkt)
+{
+	return (odp_buffer_t)pkt;
+}
+
+static inline void packet_init(pool_entry_t *pool,
+			       odp_packet_hdr_t *pkt_hdr,
+			       size_t size)
+{
+	/*
+	 * Reset parser metadata. Note that we clear via memset to make
+	 * this routine independent of any additional adds to packet metadata.
+	 */
+	const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr);
+	uint8_t *start;
+	size_t len;
+
+	start = (uint8_t *)pkt_hdr + start_offset;
+	len = sizeof(odp_packet_hdr_t) - start_offset;
+	memset(start, 0, len);
+
+	/*
+	 * Packet headroom is set from the pool's headroom
+	 * Packet tailroom is rounded up to fill the last
+	 * segment occupied by the allocated length.
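+	 *
+	 * For illustration (hypothetical numbers): with seg_size 256,
+	 * pool headroom 64 and an allocated size of 1000 bytes, five
+	 * 256-byte segments (1280 bytes) cover headroom plus data, so
+	 * the tailroom is 1280 - (64 + 1000) = 216 bytes.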
+ */ + pkt_hdr->frame_len = size; + pkt_hdr->headroom = pool->s.headroom; + pkt_hdr->tailroom = + (pool->s.seg_size * pkt_hdr->buf_hdr.segcount) - + (pool->s.headroom + size); +} + +#define pull_offset(x, len) (x = x < len ? 0 : x - len) + +static inline void push_head(odp_packet_hdr_t *pkt_hdr, size_t len) +{ + pkt_hdr->headroom -= len; + pkt_hdr->frame_len += len; + pkt_hdr->l2_offset += len; + pkt_hdr->l3_offset += len; + pkt_hdr->l4_offset += len; + pkt_hdr->payload_offset += len; +} + +static inline void pull_head(odp_packet_hdr_t *pkt_hdr, size_t len) +{ + pkt_hdr->headroom += len; + pkt_hdr->frame_len -= len; + pull_offset(pkt_hdr->l2_offset, len); + pull_offset(pkt_hdr->l3_offset, len); + pull_offset(pkt_hdr->l4_offset, len); + pull_offset(pkt_hdr->payload_offset, len); +} + +static inline void push_tail(odp_packet_hdr_t *pkt_hdr, size_t len) +{ + pkt_hdr->tailroom -= len; + pkt_hdr->frame_len += len; +} + + +static inline void pull_tail(odp_packet_hdr_t *pkt_hdr, size_t len) +{ + pkt_hdr->tailroom += len; + pkt_hdr->frame_len -= len; +} + /** * Parse packet and set internal metadata */ -void odp_packet_parse(odp_packet_t pkt, size_t len, size_t l2_offset); +void odph_packet_parse(odp_packet_t pkt, size_t len, size_t l2_offset); #ifdef __cplusplus } diff --git a/platform/linux-generic/include/odp_timer_internal.h b/platform/linux-generic/include/odp_timer_internal.h index ad28f53..d06677a 100644 --- a/platform/linux-generic/include/odp_timer_internal.h +++ b/platform/linux-generic/include/odp_timer_internal.h @@ -21,8 +21,9 @@ extern "C" { #include #include #include -#include #include +#include +#include #include struct timeout_t; @@ -48,17 +49,11 @@ typedef struct odp_timeout_hdr_t { timeout_t meta; - uint8_t buf_data[]; } odp_timeout_hdr_t; - - -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) == - ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), - "ODP_TIMEOUT_HDR_T__SIZE_ERR"); - -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0, - "ODP_TIMEOUT_HDR_T__SIZE_ERR2"); +typedef struct odp_timeout_hdr_stride { + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))]; +} odp_timeout_hdr_stride; /** diff --git a/platform/linux-generic/odp_buffer.c b/platform/linux-generic/odp_buffer.c index e54e0e7..3609543 100644 --- a/platform/linux-generic/odp_buffer.c +++ b/platform/linux-generic/odp_buffer.c @@ -5,46 +5,259 @@ */ #include -#include #include +#include +#include #include #include +void *odp_buffer_offset_map(odp_buffer_t buf, size_t offset, size_t *seglen) +{ + odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf); -void *odp_buffer_addr(odp_buffer_t buf) + if (offset > buf_hdr->size) + return NULL; + + return buffer_map(buf_hdr, offset, seglen, buf_hdr->size); +} + +void odp_buffer_offset_unmap(odp_buffer_t buf ODP_UNUSED, + size_t offset ODP_UNUSED) { - odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); +} - return hdr->addr; +size_t odp_buffer_segment_count(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->segcount; } +int odp_buffer_is_segmented(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->segcount > 1; +} -size_t odp_buffer_size(odp_buffer_t buf) +odp_buffer_segment_t odp_buffer_segment_by_index(odp_buffer_t buf, + size_t ndx) { - odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); + return buffer_segment(odp_buf_to_hdr(buf), ndx); +} - return hdr->size; +odp_buffer_segment_t odp_buffer_segment_next(odp_buffer_t buf, + odp_buffer_segment_t seg) +{ + return segment_next(odp_buf_to_hdr(buf), seg); } +void *odp_buffer_segment_map(odp_buffer_t buf, odp_buffer_segment_t 
seg,
+			     size_t *seglen)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf);
+
+	return segment_map(buf_hdr, seg, seglen, buf_hdr->size, 0);
+}
+
+void odp_buffer_segment_unmap(odp_buffer_segment_t seg ODP_UNUSED)
+{
+}
+
+odp_buffer_t odp_buffer_split(odp_buffer_t buf, size_t offset)
+{
+	size_t size = odp_buffer_size(buf);
+	odp_buffer_pool_t pool = odp_buffer_pool(buf);
+	odp_buffer_t splitbuf;
+	size_t splitsize = size - offset;
+
+	if (offset >= size)
+		return ODP_BUFFER_INVALID;
+
+	splitbuf = buffer_alloc(pool, splitsize);
+
+	if (splitbuf != ODP_BUFFER_INVALID) {
+		/* Copy the tail of buf (starting at the split offset)
+		 * into the new buffer, then trim it off the original */
+		buffer_copy_to_buffer(splitbuf, 0, buf, offset, splitsize);
+		odp_buffer_trim(buf, splitsize);
+	}
+
+	return splitbuf;
+}
+
+odp_buffer_t odp_buffer_join(odp_buffer_t buf1, odp_buffer_t buf2)
+{
+	size_t size1 = odp_buffer_size(buf1);
+	size_t size2 = odp_buffer_size(buf2);
+	odp_buffer_t joinbuf;
+	void *udata1, *udata_join;
+	size_t udata_size1, udata_size_join;
+
+	joinbuf = buffer_alloc(odp_buffer_pool(buf1), size1 + size2);
+
+	if (joinbuf != ODP_BUFFER_INVALID) {
+		buffer_copy_to_buffer(joinbuf, 0, buf1, 0, size1);
+		buffer_copy_to_buffer(joinbuf, size1, buf2, 0, size2);
+
+		/* Copy user metadata if present */
+		udata1 = odp_buffer_udata(buf1, &udata_size1);
+		udata_join = odp_buffer_udata(joinbuf, &udata_size_join);
+
+		if (udata1 != NULL && udata_join != NULL)
+			memcpy(udata_join, udata1,
+			       udata_size_join > udata_size1 ?
+			       udata_size1 : udata_size_join);
+
+		odp_buffer_free(buf1);
+		odp_buffer_free(buf2);
+	}
+
+	return joinbuf;
+}
+
+odp_buffer_t odp_buffer_trim(odp_buffer_t buf, size_t bytes)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf);
+
+	if (bytes >= buf_hdr->size)
+		return ODP_BUFFER_INVALID;
+
+	buf_hdr->size -= bytes;
+	size_t bufsegs = buf_hdr->size / buf_hdr->segsize;
+
+	if (buf_hdr->size % buf_hdr->segsize > 0)
+		bufsegs++;
+
+	pool_entry_t *pool = odp_pool_to_entry(buf_hdr->pool_hdl);
+
+	/* Return excess segments back to block list */
+	while (buf_hdr->segcount > bufsegs)
+		ret_blk(&pool->s, buf_hdr->addr[--buf_hdr->segcount]);
+
+	return buf;
+}
+
+odp_buffer_t odp_buffer_extend(odp_buffer_t buf, size_t bytes)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf);
+
+	size_t lastseg = buf_hdr->size % buf_hdr->segsize;
+
+	if (bytes <= buf_hdr->segsize - lastseg) {
+		buf_hdr->size += bytes;
+		return buf;
+	}
+
+	pool_entry_t *pool = odp_pool_to_entry(buf_hdr->pool_hdl);
+	int extsize = buf_hdr->size + bytes;
+
+	/* Extend buffer by adding additional segments to it */
+	if (extsize > ODP_CONFIG_BUF_MAX_SIZE || pool->s.flags.unsegmented)
+		return ODP_BUFFER_INVALID;
+
+	size_t segcount = buf_hdr->segcount;
+	size_t extsegs = extsize / buf_hdr->segsize;
+
+	/* Round the needed segment count up for any partial segment */
+	if (extsize % buf_hdr->segsize > 0)
+		extsegs++;
+
+	while (extsegs > segcount) {
+		void *newblk = get_blk(&pool->s);
+
+		/* If we fail to get all the blocks we need, back out */
+		if (newblk == NULL) {
+			while (segcount > buf_hdr->segcount)
+				ret_blk(&pool->s, buf_hdr->addr[--segcount]);
+			return ODP_BUFFER_INVALID;
+		}
+
+		buf_hdr->addr[segcount++] = newblk;
+	}
+
+	buf_hdr->segcount = segcount;
+	buf_hdr->size = extsize;
+
+	return buf;
+}
+
+odp_buffer_t odp_buffer_clone(odp_buffer_t buf)
+{
+	return odp_buffer_copy(buf, odp_buf_to_hdr(buf)->pool_hdl);
+}
+
+odp_buffer_t
odp_buffer_copy(odp_buffer_t buf, odp_buffer_pool_t pool) +{ + odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf); + size_t len = buf_hdr->size; + odp_buffer_t cpybuf = buffer_alloc(pool, len); + + if (cpybuf == ODP_BUFFER_INVALID) + return ODP_BUFFER_INVALID; + + if (buffer_copy_to_buffer(cpybuf, 0, buf, 0, len) == 0) + return cpybuf; + + odp_buffer_free(cpybuf); + return ODP_BUFFER_INVALID; +} + +int buffer_copy_to_buffer(odp_buffer_t dstbuf, size_t dstoffset, + odp_buffer_t srcbuf, size_t srcoffset, + size_t len) +{ + void *dstmap; + void *srcmap; + size_t cpylen, minseg, dstseglen, srcseglen; + + while (len > 0) { + dstmap = odp_buffer_offset_map(dstbuf, dstoffset, &dstseglen); + srcmap = odp_buffer_offset_map(srcbuf, srcoffset, &srcseglen); + if (dstmap == NULL || srcmap == NULL) + return -1; + minseg = dstseglen > srcseglen ? srcseglen : dstseglen; + cpylen = len > minseg ? minseg : len; + memcpy(dstmap, srcmap, cpylen); + srcoffset += cpylen; + dstoffset += cpylen; + len -= cpylen; + } + + return 0; +} + +void *odp_buffer_addr(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->addr[0]; +} + +odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->pool_hdl; +} + +size_t odp_buffer_size(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->size; +} + +odp_buffer_type_e odp_buffer_type(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->type; +} + +void *odp_buffer_udata(odp_buffer_t buf, size_t *usize) +{ + odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); + + *usize = hdr->udata_size; + return hdr->udata_addr; +} + +void *odp_buffer_udata_addr(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->udata_addr; +} + +int odp_buffer_is_valid(odp_buffer_t buf) +{ + return validate_buf(buf) != NULL; +} int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf) { @@ -63,27 +276,13 @@ int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf) len += snprintf(&str[len], n-len, " pool %i\n", hdr->pool_hdl); len += snprintf(&str[len], n-len, - " index %"PRIu32"\n", hdr->index); - len += snprintf(&str[len], n-len, - " phy_addr %"PRIu64"\n", hdr->phys_addr); - len += snprintf(&str[len], n-len, - " addr %p\n", hdr->addr); + " addr %p\n", hdr->addr[0]); len += snprintf(&str[len], n-len, " size %zu\n", hdr->size); len += snprintf(&str[len], n-len, - " cur_offset %zu\n", hdr->cur_offset); - len += snprintf(&str[len], n-len, " ref_count %i\n", hdr->ref_count); len += snprintf(&str[len], n-len, " type %i\n", hdr->type); - len += snprintf(&str[len], n-len, - " Scatter list\n"); - len += snprintf(&str[len], n-len, - " num_bufs %i\n", hdr->scatter.num_bufs); - len += snprintf(&str[len], n-len, - " pos %i\n", hdr->scatter.pos); - len += snprintf(&str[len], n-len, - " total_len %zu\n", hdr->scatter.total_len); return len; } @@ -98,11 +297,5 @@ void odp_buffer_print(odp_buffer_t buf) len = odp_buffer_snprint(str, max_len-1, buf); str[len] = 0; - printf("\n%s\n", str); -} - -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src) -{ - (void)buf_dst; - (void)buf_src; + ODP_LOG(ODP_LOG_DBG, "\n%s\n", str); } diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c index a48d7d6..4443883 100644 --- a/platform/linux-generic/odp_buffer_pool.c +++ b/platform/linux-generic/odp_buffer_pool.c @@ -6,8 +6,9 @@ #include #include -#include #include +#include +#include #include #include #include @@ -16,6 +17,7 @@ #include #include #include +#include #include #include @@ -33,40 +35,29 @@ #define LOCK_INIT(a) odp_spinlock_init(a) #endif - -#if 
ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS -#error ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS -#endif - -#define NULL_INDEX ((uint32_t)-1) - -union buffer_type_any_u { +typedef union buffer_type_any_u { odp_buffer_hdr_t buf; odp_packet_hdr_t pkt; odp_timeout_hdr_t tmo; -}; - -ODP_STATIC_ASSERT((sizeof(union buffer_type_any_u) % 8) == 0, - "BUFFER_TYPE_ANY_U__SIZE_ERR"); +} odp_anybuf_t; /* Any buffer type header */ typedef struct { union buffer_type_any_u any_hdr; /* any buffer type */ - uint8_t buf_data[]; /* start of buffer data area */ } odp_any_buffer_hdr_t; +typedef struct odp_any_hdr_stride { + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_any_buffer_hdr_t))]; +} odp_any_hdr_stride; -typedef union pool_entry_u { - struct pool_entry_s s; - - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; - -} pool_entry_t; +#if ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS +#error ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS +#endif +#define NULL_INDEX ((uint32_t)-1) typedef struct pool_table_t { pool_entry_t pool[ODP_CONFIG_BUFFER_POOLS]; - } pool_table_t; @@ -76,39 +67,6 @@ static pool_table_t *pool_tbl; /* Pool entry pointers (for inlining) */ void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS]; - -static __thread odp_buffer_chunk_hdr_t *local_chunk[ODP_CONFIG_BUFFER_POOLS]; - - -static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) -{ - return pool_id + 1; -} - - -static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) -{ - return pool_hdl -1; -} - - -static inline void set_handle(odp_buffer_hdr_t *hdr, - pool_entry_t *pool, uint32_t index) -{ - odp_buffer_pool_t pool_hdl = pool->s.pool_hdl; - uint32_t pool_id = pool_handle_to_index(pool_hdl); - - if (pool_id >= ODP_CONFIG_BUFFER_POOLS) - ODP_ABORT("set_handle: Bad pool handle %u\n", pool_hdl); - - if (index > ODP_BUFFER_MAX_INDEX) - ODP_ERR("set_handle: Bad buffer index\n"); - - hdr->handle.pool_id = pool_id; - hdr->handle.index = index; -} - - int odp_buffer_pool_init_global(void) { uint32_t i; @@ -142,269 +100,173 @@ int odp_buffer_pool_init_global(void) return 0; } - -static odp_buffer_hdr_t *index_to_hdr(pool_entry_t *pool, uint32_t index) -{ - odp_buffer_hdr_t *hdr; - - hdr = (odp_buffer_hdr_t *)(pool->s.buf_base + index * pool->s.buf_size); - return hdr; -} - - -static void add_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr, uint32_t index) -{ - uint32_t i = chunk_hdr->chunk.num_bufs; - chunk_hdr->chunk.buf_index[i] = index; - chunk_hdr->chunk.num_bufs++; -} - - -static uint32_t rem_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr) +/** + * Buffer pool creation + */ +odp_buffer_pool_t odp_buffer_pool_create(const char *name, + odp_buffer_pool_param_t *params, + odp_buffer_pool_init_t *init_args) { - uint32_t index; + odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; + pool_entry_t *pool; uint32_t i; + odp_buffer_pool_init_t *init_params; - i = chunk_hdr->chunk.num_bufs - 1; - index = chunk_hdr->chunk.buf_index[i]; - chunk_hdr->chunk.num_bufs--; - return index; -} - - -static odp_buffer_chunk_hdr_t *next_chunk(pool_entry_t *pool, - odp_buffer_chunk_hdr_t *chunk_hdr) -{ - uint32_t index; - - index = chunk_hdr->chunk.buf_index[ODP_BUFS_PER_CHUNK-1]; - if (index == NULL_INDEX) - return NULL; - else - return (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index); -} - - -static odp_buffer_chunk_hdr_t *rem_chunk(pool_entry_t *pool) -{ - odp_buffer_chunk_hdr_t *chunk_hdr; - - chunk_hdr = pool->s.head; - if (chunk_hdr == NULL) { - /* Pool is empty */ - return NULL; - } - - pool->s.head = 
-	pool->s.head = next_chunk(pool, chunk_hdr);
-	pool->s.free_bufs -= ODP_BUFS_PER_CHUNK;
-
-	/* unlink */
-	rem_buf_index(chunk_hdr);
-	return chunk_hdr;
-}
-
+	/* Default initialization parameters */
+	static odp_buffer_pool_init_t default_init_params = {
+		.udata_size = 0,
+		.buf_init = NULL,
+		.buf_init_arg = NULL,
+	};
 
-static void add_chunk(pool_entry_t *pool, odp_buffer_chunk_hdr_t *chunk_hdr)
-{
-	if (pool->s.head) /* link pool head to the chunk */
-		add_buf_index(chunk_hdr, pool->s.head->buf_hdr.index);
-	else
-		add_buf_index(chunk_hdr, NULL_INDEX);
+	/* Handle NULL input parameters */
+	if (params == NULL)
+		return ODP_BUFFER_POOL_INVALID;
 
-	pool->s.head = chunk_hdr;
-	pool->s.free_bufs += ODP_BUFS_PER_CHUNK;
-}
+	init_params = init_args == NULL ? &default_init_params : init_args;
 
+	/* Restriction for v1.0: Only packet buffer types can be segmented */
+	if (params->buf_type != ODP_BUFFER_TYPE_PACKET)
+		params->buf_opts |= ODP_BUFFER_OPTS_UNSEGMENTED;
 
-static void check_align(pool_entry_t *pool, odp_buffer_hdr_t *hdr)
-{
-	if (!ODP_ALIGNED_CHECK_POWER_2(hdr->addr, pool->s.user_align)) {
-		ODP_ABORT("check_align: user data align error %p, align %zu\n",
-			  hdr->addr, pool->s.user_align);
-	}
+	int unsegmented = ((params->buf_opts & ODP_BUFFER_OPTS_UNSEGMENTED) ==
+			   ODP_BUFFER_OPTS_UNSEGMENTED);
 
-	if (!ODP_ALIGNED_CHECK_POWER_2(hdr, ODP_CACHE_LINE_SIZE)) {
-		ODP_ABORT("check_align: hdr align error %p, align %i\n",
-			  hdr, ODP_CACHE_LINE_SIZE);
-	}
-}
+	uint32_t udata_stride =
+		ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size);
 
+	uint32_t blk_size, buf_stride;
 
-static void fill_hdr(void *ptr, pool_entry_t *pool, uint32_t index,
-		     int buf_type)
-{
-	odp_buffer_hdr_t *hdr = (odp_buffer_hdr_t *)ptr;
-	size_t size = pool->s.hdr_size;
-	uint8_t *buf_data;
-
-	if (buf_type == ODP_BUFFER_TYPE_CHUNK)
-		size = sizeof(odp_buffer_chunk_hdr_t);
-
-	switch (pool->s.buf_type) {
-	odp_raw_buffer_hdr_t *raw_hdr;
-	odp_packet_hdr_t *packet_hdr;
-	odp_timeout_hdr_t *tmo_hdr;
-	odp_any_buffer_hdr_t *any_hdr;
-
+	switch (params->buf_type) {
 	case ODP_BUFFER_TYPE_RAW:
-		raw_hdr  = ptr;
-		buf_data = raw_hdr->buf_data;
+		blk_size = params->buf_size;
+		buf_stride = sizeof(odp_buffer_hdr_stride);
 		break;
+
 	case ODP_BUFFER_TYPE_PACKET:
-		packet_hdr = ptr;
-		buf_data   = packet_hdr->buf_data;
+		if (unsegmented)
+			blk_size =
+				ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size);
+		else
+			blk_size = ODP_ALIGN_ROUNDUP(params->buf_size,
+						     ODP_CONFIG_BUF_SEG_SIZE);
+		buf_stride = sizeof(odp_packet_hdr_stride);
 		break;
+
 	case ODP_BUFFER_TYPE_TIMEOUT:
-		tmo_hdr  = ptr;
-		buf_data = tmo_hdr->buf_data;
+		blk_size = 0; /* Timeouts have no block data, only metadata */
+		buf_stride = sizeof(odp_timeout_hdr_stride);
 		break;
+
 	case ODP_BUFFER_TYPE_ANY:
-		any_hdr  = ptr;
-		buf_data = any_hdr->buf_data;
+		if (unsegmented)
+			blk_size =
+				ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size);
+		else
+			blk_size = ODP_ALIGN_ROUNDUP(params->buf_size,
+						     ODP_CONFIG_BUF_SEG_SIZE);
+		buf_stride = sizeof(odp_any_hdr_stride);
 		break;
+
 	default:
-		ODP_ABORT("Bad buffer type\n");
+		return ODP_BUFFER_POOL_INVALID;
 	}
 
-	memset(hdr, 0, size);
-
-	set_handle(hdr, pool, index);
-
-	hdr->addr     = &buf_data[pool->s.buf_offset - pool->s.hdr_size];
-	hdr->index    = index;
-	hdr->size     = pool->s.user_size;
-	hdr->pool_hdl = pool->s.pool_hdl;
-	hdr->type     = buf_type;
-
-	check_align(pool, hdr);
-}
-
+	/* Find an unused buffer pool slot and initialize it as requested */
+	for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) {
+		pool = get_pool_entry(i);
 
-static void link_bufs(pool_entry_t *pool)
-{
-	odp_buffer_chunk_hdr_t *chunk_hdr;
-	size_t hdr_size;
-	size_t data_size;
-	size_t data_align;
-	size_t tot_size;
-	size_t offset;
-	size_t min_size;
-	uint64_t pool_size;
-	uintptr_t buf_base;
-	uint32_t index;
-	uintptr_t pool_base;
-	int buf_type;
-
-	buf_type   = pool->s.buf_type;
-	data_size  = pool->s.user_size;
-	data_align = pool->s.user_align;
-	pool_size  = pool->s.pool_size;
-	pool_base  = (uintptr_t) pool->s.pool_base_addr;
-
-	if (buf_type == ODP_BUFFER_TYPE_RAW) {
-		hdr_size = sizeof(odp_raw_buffer_hdr_t);
-	} else if (buf_type == ODP_BUFFER_TYPE_PACKET) {
-		hdr_size = sizeof(odp_packet_hdr_t);
-	} else if (buf_type == ODP_BUFFER_TYPE_TIMEOUT) {
-		hdr_size = sizeof(odp_timeout_hdr_t);
-	} else if (buf_type == ODP_BUFFER_TYPE_ANY) {
-		hdr_size = sizeof(odp_any_buffer_hdr_t);
-	} else
-		ODP_ABORT("odp_buffer_pool_create: Bad type %i\n", buf_type);
-
-
-	/* Chunk must fit into buffer data area.*/
-	min_size = sizeof(odp_buffer_chunk_hdr_t) - hdr_size;
-	if (data_size < min_size)
-		data_size = min_size;
-
-	/* Roundup data size to full cachelines */
-	data_size = ODP_CACHE_LINE_SIZE_ROUNDUP(data_size);
-
-	/* Min cacheline alignment for buffer header and data */
-	data_align = ODP_CACHE_LINE_SIZE_ROUNDUP(data_align);
-	offset = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size);
-
-	/* Multiples of cacheline size */
-	if (data_size > data_align)
-		tot_size = data_size + offset;
-	else
-		tot_size = data_align + offset;
-
-	/* First buffer */
-	buf_base = ODP_ALIGN_ROUNDUP(pool_base + offset, data_align) - offset;
-
-	pool->s.hdr_size   = hdr_size;
-	pool->s.buf_base   = buf_base;
-	pool->s.buf_size   = tot_size;
-	pool->s.buf_offset = offset;
-	index = 0;
-
-	chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index);
-	pool->s.head = NULL;
-	pool_size -= buf_base - pool_base;
-
-	while (pool_size > ODP_BUFS_PER_CHUNK * tot_size) {
-		int i;
-
-		fill_hdr(chunk_hdr, pool, index, ODP_BUFFER_TYPE_CHUNK);
-
-		index++;
-
-		for (i = 0; i < ODP_BUFS_PER_CHUNK - 1; i++) {
-			odp_buffer_hdr_t *hdr = index_to_hdr(pool, index);
-
-			fill_hdr(hdr, pool, index, buf_type);
-
-			add_buf_index(chunk_hdr, index);
-			index++;
+		LOCK(&pool->s.lock);
+		if (pool->s.shm != ODP_SHM_INVALID) {
+			UNLOCK(&pool->s.lock);
+			continue;
 		}
 
-		add_chunk(pool, chunk_hdr);
-
-		chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool,
-								   index);
-		pool->s.num_bufs += ODP_BUFS_PER_CHUNK;
-		pool_size -= ODP_BUFS_PER_CHUNK * tot_size;
-	}
-}
-
-odp_buffer_pool_t odp_buffer_pool_create(const char *name,
-					 void *base_addr, uint64_t size,
-					 size_t buf_size, size_t buf_align,
-					 int buf_type)
-{
-	odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID;
-	pool_entry_t *pool;
-	uint32_t i;
+		/* found free pool */
+		size_t block_size, mdata_size, udata_size;
 
-	for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) {
-		pool = get_pool_entry(i);
+		strncpy(pool->s.name, name,
+			ODP_BUFFER_POOL_NAME_LEN - 1);
+		pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0;
 
-		LOCK(&pool->s.lock);
+		pool->s.params = *params;
+		pool->s.init_params = *init_params;
 
-		if (pool->s.buf_base == 0) {
-			/* found free pool */
+		mdata_size = params->buf_num * buf_stride;
+		udata_size = params->buf_num * udata_stride;
+		block_size = params->buf_num * blk_size;
 
-			strncpy(pool->s.name, name,
-				ODP_BUFFER_POOL_NAME_LEN - 1);
-			pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0;
-			pool->s.pool_base_addr = base_addr;
-			pool->s.pool_size      = size;
-			pool->s.user_size      = buf_size;
-			pool->s.user_align     = buf_align;
-			pool->s.buf_type       = buf_type;
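For reference, the pool footprint computed just below is the page-rounded sum
of the three regions sized above: buffer metadata (buf_num * buf_stride),
per-buffer user areas (buf_num * udata_stride), and data blocks
(buf_num * blk_size). A worked example with made-up numbers, assuming for
illustration an ODP_CONFIG_BUF_SEG_SIZE of 1792 bytes:

    /* Sketch: 1024 packet buffers of 2048 bytes each.  blk_size is
     * 2048 rounded up to whole 1792-byte segments, i.e. 3584. */
    static size_t example_pool_size(size_t buf_stride, size_t udata_stride)
    {
            size_t mdata_size = 1024 * buf_stride;   /* buffer headers  */
            size_t udata_size = 1024 * udata_stride; /* user data areas */
            size_t block_size = 1024 * 3584;         /* data segments   */

            /* The real code page-rounds this sum before odp_shm_reserve() */
            return mdata_size + udata_size + block_size;
    }
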
+		pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(mdata_size +
+							  udata_size +
+							  block_size);
 
-			link_bufs(pool);
+		pool->s.shm = odp_shm_reserve(pool->s.name, pool->s.pool_size,
+					      ODP_PAGE_SIZE, 0);
+		if (pool->s.shm == ODP_SHM_INVALID) {
 			UNLOCK(&pool->s.lock);
-
-			pool_hdl = pool->s.pool_hdl;
-			break;
+			return ODP_BUFFER_POOL_INVALID;
 		}
 
+		/* Now safe to unlock since pool entry has been allocated */
 		UNLOCK(&pool->s.lock);
+
+		pool->s.pool_base_addr = (uint8_t *)odp_shm_addr(pool->s.shm);
+		pool->s.flags.unsegmented = unsegmented;
+		pool->s.seg_size = unsegmented ?
+			blk_size : ODP_CONFIG_BUF_SEG_SIZE;
+		uint8_t *udata_base_addr = pool->s.pool_base_addr + mdata_size;
+		uint8_t *block_base_addr = udata_base_addr + udata_size;
+
+		/* bufcount will decrement down to 0 as we populate freelist */
+		pool->s.bufcount = params->buf_num;
+		pool->s.buf_stride = buf_stride;
+		pool->s.udata_stride = udata_stride;
+		pool->s.high_wm = 0;
+		pool->s.low_wm = 0;
+		pool->s.headroom = 0;
+		pool->s.tailroom = 0;
+		pool->s.buf_freelist = NULL;
+		pool->s.blk_freelist = NULL;
+
+		uint8_t *buf = udata_base_addr - buf_stride;
+		uint8_t *udat = (udata_stride == 0) ? NULL :
+			block_base_addr - udata_stride;
+
+		/* Init buffer common header and add to pool buffer freelist */
+		do {
+			odp_buffer_hdr_t *tmp =
+				(odp_buffer_hdr_t *)(void *)buf;
+
+			/* Initialize buffer metadata */
+			tmp->buf_hdl.handle = odp_buffer_encode_handle(tmp);
+			tmp->size = 0;
+			tmp->ref_count = 0;
+			tmp->type = params->buf_type;
+			tmp->pool_hdl = pool->s.pool_hdl;
+			tmp->udata_addr = (void *)udat;
+			tmp->udata_size = init_params->udata_size;
+			tmp->segcount = 0;
+			tmp->segsize = pool->s.seg_size;
+
+			/* Push buffer onto pool's freelist */
+			ret_buf(&pool->s, tmp);
+			buf -= buf_stride;
+			udat -= udata_stride;
+		} while (buf >= pool->s.pool_base_addr);
+
+		/* Form block freelist for pool */
+		uint8_t *blk = pool->s.pool_base_addr + pool->s.pool_size -
+			pool->s.seg_size;
+
+		if (blk_size > 0)
+			do {
+				ret_blk(&pool->s, blk);
+				blk -= pool->s.seg_size;
+			} while (blk >= block_base_addr);
+
+		pool_hdl = pool->s.pool_hdl;
+		break;
 	}
 
 	return pool_hdl;
@@ -431,144 +293,181 @@ odp_buffer_pool_t odp_buffer_pool_lookup(const char *name)
 	return ODP_BUFFER_POOL_INVALID;
 }
 
-
-odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl)
+odp_buffer_pool_t odp_buffer_pool_next(odp_buffer_pool_t pool_hdl,
+				       char *name,
+				       size_t *udata_size,
+				       odp_buffer_pool_param_t *params,
+				       int *predef)
 {
-	pool_entry_t *pool;
-	odp_buffer_chunk_hdr_t *chunk;
-	odp_buffer_bits_t handle;
-	uint32_t pool_id = pool_handle_to_index(pool_hdl);
-
-	pool  = get_pool_entry(pool_id);
-	chunk = local_chunk[pool_id];
-
-	if (chunk == NULL) {
-		LOCK(&pool->s.lock);
-		chunk = rem_chunk(pool);
-		UNLOCK(&pool->s.lock);
+	pool_entry_t *next_pool;
+	uint32_t pool_id;
 
-		if (chunk == NULL)
-			return ODP_BUFFER_INVALID;
+	/* NULL input means first element */
+	if (pool_hdl == ODP_BUFFER_POOL_INVALID) {
+		pool_id = 0;
+		next_pool = get_pool_entry(pool_id);
+	} else {
+		pool_id = pool_handle_to_index(pool_hdl);
 
-		local_chunk[pool_id] = chunk;
+		if (++pool_id == ODP_CONFIG_BUFFER_POOLS)
+			return ODP_BUFFER_POOL_INVALID;
+		else
+			next_pool = get_pool_entry(pool_id);
 	}
 
-	if (chunk->chunk.num_bufs == 0) {
-		/* give the chunk buffer */
-		local_chunk[pool_id] = NULL;
-		chunk->buf_hdr.type = pool->s.buf_type;
+	/* Only interested in pools that exist */
+	while (next_pool->s.shm == ODP_SHM_INVALID) {
+		if (++pool_id == ODP_CONFIG_BUFFER_POOLS)
+			return ODP_BUFFER_POOL_INVALID;
+		else
+			next_pool = get_pool_entry(pool_id);
+	}
 
-		handle = chunk->buf_hdr.handle;
-	} else {
-		odp_buffer_hdr_t *hdr;
-		uint32_t index;
-		index = rem_buf_index(chunk);
-		hdr = index_to_hdr(pool, index);
+	/* Found the next pool, so return info about it */
+	strncpy(name, next_pool->s.name, ODP_BUFFER_POOL_NAME_LEN - 1);
+	name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0;
 
-		handle = hdr->handle;
-	}
+	*udata_size = next_pool->s.init_params.udata_size;
+	*params = next_pool->s.params;
+	*predef = next_pool->s.flags.predefined;
 
-	return handle.u32;
+	return next_pool->s.pool_hdl;
 }
 
-
-void odp_buffer_free(odp_buffer_t buf)
+int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
 {
-	odp_buffer_hdr_t *hdr;
-	uint32_t pool_id;
-	pool_entry_t *pool;
-	odp_buffer_chunk_hdr_t *chunk_hdr;
+	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	hdr       = odp_buf_to_hdr(buf);
-	pool_id   = pool_handle_to_index(hdr->pool_hdl);
-	pool      = get_pool_entry(pool_id);
-	chunk_hdr = local_chunk[pool_id];
+	if (pool == NULL)
+		return -1;
 
-	if (chunk_hdr && chunk_hdr->chunk.num_bufs == ODP_BUFS_PER_CHUNK - 1) {
-		/* Current chunk is full. Push back to the pool */
-		LOCK(&pool->s.lock);
-		add_chunk(pool, chunk_hdr);
+	LOCK(&pool->s.lock);
+
+	if (pool->s.shm == ODP_SHM_INVALID ||
+	    pool->s.bufcount > 0 ||
+	    pool->s.flags.predefined) {
 		UNLOCK(&pool->s.lock);
-		chunk_hdr = NULL;
+		return -1;
 	}
 
-	if (chunk_hdr == NULL) {
-		/* Use this buffer */
-		chunk_hdr = (odp_buffer_chunk_hdr_t *)hdr;
-		local_chunk[pool_id] = chunk_hdr;
-		chunk_hdr->chunk.num_bufs = 0;
-	} else {
-		/* Add to current chunk */
-		add_buf_index(chunk_hdr, hdr->index);
-	}
+	odp_shm_free(pool->s.shm);
+
+	pool->s.shm = ODP_SHM_INVALID;
+	UNLOCK(&pool->s.lock);
+
+	return 0;
 }
 
+size_t odp_buffer_pool_headroom(odp_buffer_pool_t pool_hdl)
+{
+	return odp_pool_to_entry(pool_hdl)->s.headroom;
+}
 
-odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf)
+int odp_buffer_pool_set_headroom(odp_buffer_pool_t pool_hdl, size_t hr)
 {
-	odp_buffer_hdr_t *hdr;
+	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	hdr = odp_buf_to_hdr(buf);
-	return hdr->pool_hdl;
+	if (hr >= pool->s.seg_size/2) {
+		return -1;
+	} else {
+		pool->s.headroom = hr;
+		return 0;
+	}
 }
 
-
-void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
+size_t odp_buffer_pool_tailroom(odp_buffer_pool_t pool_hdl)
 {
-	pool_entry_t *pool;
-	odp_buffer_chunk_hdr_t *chunk_hdr;
-	uint32_t i;
-	uint32_t pool_id;
+	return odp_pool_to_entry(pool_hdl)->s.tailroom;
+}
 
-	pool_id = pool_handle_to_index(pool_hdl);
-	pool    = get_pool_entry(pool_id);
+int odp_buffer_pool_set_tailroom(odp_buffer_pool_t pool_hdl, size_t tr)
+{
+	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	printf("Pool info\n");
-	printf("---------\n");
-	printf("  pool          %i\n",           pool->s.pool_hdl);
-	printf("  name          %s\n",           pool->s.name);
-	printf("  pool base     %p\n",           pool->s.pool_base_addr);
-	printf("  buf base      0x%"PRIxPTR"\n", pool->s.buf_base);
-	printf("  pool size     0x%"PRIx64"\n",  pool->s.pool_size);
-	printf("  buf size      %zu\n",          pool->s.user_size);
-	printf("  buf align     %zu\n",          pool->s.user_align);
-	printf("  hdr size      %zu\n",          pool->s.hdr_size);
-	printf("  alloc size    %zu\n",          pool->s.buf_size);
-	printf("  offset to hdr %zu\n",          pool->s.buf_offset);
-	printf("  num bufs      %"PRIu64"\n",    pool->s.num_bufs);
-	printf("  free bufs     %"PRIu64"\n",    pool->s.free_bufs);
-
-	/* first chunk */
-	chunk_hdr = pool->s.head;
-
-	if (chunk_hdr == NULL) {
-		ODP_ERR("  POOL EMPTY\n");
-		return;
+	if (tr >= pool->s.seg_size/2) {
+		return -1;
+	} else {
+		pool->s.tailroom = tr;
+		return 0;
 	}
 }
 
-	printf("\n  First chunk\n");
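Headroom and tailroom are charged against every allocation made from the
pool, which is why the setters above cap them below half the segment size.
A usage sketch (sizes illustrative):

    /* Sketch: reserve per-packet space for encapsulation headers and a
     * trailer.  'pool' is assumed to come from odp_buffer_pool_create(). */
    static int reserve_rooms(odp_buffer_pool_t pool)
    {
            /* Each value must be less than half the pool's segment size */
            if (odp_buffer_pool_set_headroom(pool, 128) != 0)
                    return -1;

            return odp_buffer_pool_set_tailroom(pool, 64);
    }
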
+odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size)
+{
+	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
+	size_t totsize = pool->s.headroom + size + pool->s.tailroom;
+	odp_anybuf_t *buf;
+	uint8_t *blk;
+
+	if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) ||
+	    (!pool->s.flags.unsegmented && totsize > ODP_CONFIG_BUF_MAX_SIZE))
+		return ODP_BUFFER_INVALID;
+
+	buf = (odp_anybuf_t *)(void *)get_buf(&pool->s);
+
+	if (buf == NULL)
+		return ODP_BUFFER_INVALID;
+
+	/* Get blocks for this buffer, if pool uses application data */
+	if (buf->buf.segsize > 0)
+		do {
+			blk = get_blk(&pool->s);
+			if (blk == NULL) {
+				ret_buf(&pool->s, &buf->buf);
+				return ODP_BUFFER_INVALID;
+			}
+			buf->buf.addr[buf->buf.segcount++] = blk;
+			totsize = totsize < pool->s.seg_size ? 0 :
+				totsize - pool->s.seg_size;
+		} while (totsize > 0);
+
+	switch (buf->buf.type) {
+	case ODP_BUFFER_TYPE_RAW:
+		break;
 
-	for (i = 0; i < chunk_hdr->chunk.num_bufs - 1; i++) {
-		uint32_t index;
-		odp_buffer_hdr_t *hdr;
+	case ODP_BUFFER_TYPE_PACKET:
+		packet_init(pool, &buf->pkt, size);
+		if (pool->s.init_params.buf_init != NULL)
+			(*pool->s.init_params.buf_init)
+				(buf->buf.buf_hdl.handle,
+				 pool->s.init_params.buf_init_arg);
+		break;
 
-		index = chunk_hdr->chunk.buf_index[i];
-		hdr   = index_to_hdr(pool, index);
+	case ODP_BUFFER_TYPE_TIMEOUT:
+		break;
 
-		printf("  [%i] addr %p, id %"PRIu32"\n", i, hdr->addr, index);
+	default:
+		ret_buf(&pool->s, &buf->buf);
+		return ODP_BUFFER_INVALID;
 	}
 
-	printf("  [%i] addr %p, id %"PRIu32"\n", i, chunk_hdr->buf_hdr.addr,
-	       chunk_hdr->buf_hdr.index);
+	return odp_hdr_to_buf(&buf->buf);
+}
 
-	/* next chunk */
-	chunk_hdr = next_chunk(pool, chunk_hdr);
+odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl)
+{
+	return buffer_alloc(pool_hdl,
+			    odp_pool_to_entry(pool_hdl)->s.params.buf_size);
+}
 
-	if (chunk_hdr) {
-		printf("  Next chunk\n");
-		printf("  addr %p, id %"PRIu32"\n", chunk_hdr->buf_hdr.addr,
-		       chunk_hdr->buf_hdr.index);
-	}
+void odp_buffer_free(odp_buffer_t buf)
+{
+	odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf);
+	pool_entry_t *pool = odp_buf_to_pool(hdr);
+	ret_buf(&pool->s, hdr);
+}
+
+void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
+{
+	pool_entry_t *pool;
+	uint32_t pool_id;
+
+	pool_id = pool_handle_to_index(pool_hdl);
+	pool    = get_pool_entry(pool_id);
 
-	printf("\n");
+	ODP_LOG(ODP_LOG_DBG, "Pool info\n");
+	ODP_LOG(ODP_LOG_DBG, "---------\n");
+	ODP_LOG(ODP_LOG_DBG, "  pool      %i\n", pool->s.pool_hdl);
+	ODP_LOG(ODP_LOG_DBG, "  name      %s\n", pool->s.name);
+	ODP_LOG(ODP_LOG_DBG, "  pool base %p\n", pool->s.pool_base_addr);
 }
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c
index 82ea879..0d69b39 100644
--- a/platform/linux-generic/odp_packet.c
+++ b/platform/linux-generic/odp_packet.c
@@ -11,141 +11,927 @@
 
 #include
 #include
+#include
+#include
 #include
 #include
 
-static inline uint8_t parse_ipv4(odp_packet_hdr_t *pkt_hdr,
-				 odph_ipv4hdr_t *ipv4, size_t *offset_out);
-static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr,
-				 odph_ipv6hdr_t *ipv6, size_t *offset_out);
-
 void odp_packet_init(odp_packet_t pkt)
 {
 	odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt);
-	const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr);
-	uint8_t *start;
-	size_t len;
+	pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr);
+
+	packet_init(pool, pkt_hdr, 0);
+}
+
+odp_packet_t odp_packet_from_buffer(odp_buffer_t buf)
+{
+	return (odp_packet_t)buf;
+}
+
+odp_buffer_t odp_packet_to_buffer(odp_packet_t pkt)
+{
+	return (odp_buffer_t)pkt;
+}
+
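In this implementation a packet is simply a packet-typed buffer, so the
handle conversions above compile down to casts. A short sketch of moving
between the two views (the pool name is illustrative):

    /* Sketch: allocate from a packet pool and handle the same object
     * both as a packet and as its underlying buffer. */
    static void packet_buffer_views(odp_buffer_pool_t pkt_pool)
    {
            odp_packet_t pkt = odp_packet_alloc(pkt_pool);

            if (pkt == ODP_PACKET_INVALID)
                    return;

            odp_buffer_t buf = odp_packet_to_buffer(pkt);

            /* Same storage, two typed handles */
            odp_packet_free(odp_packet_from_buffer(buf));
    }
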
+size_t odp_packet_len(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->frame_len;
+}
+
+odp_pktio_t odp_packet_input(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input;
+}
+
+void *odp_packet_offset_map(odp_packet_t pkt, size_t offset, size_t *seglen)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	if (offset >= pkt_hdr->frame_len)
+		return NULL;
+
+	return buffer_map(&pkt_hdr->buf_hdr,
+			  pkt_hdr->headroom + offset,
+			  seglen, pkt_hdr->frame_len);
+}
+
+void odp_packet_offset_unmap(odp_packet_t pkt ODP_UNUSED,
+			     size_t offset ODP_UNUSED)
+{
+}
+
+void *odp_packet_map(odp_packet_t pkt, size_t *seglen)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	return buffer_map(&pkt_hdr->buf_hdr, 0, seglen, pkt_hdr->frame_len);
+}
+
+void *odp_packet_addr(odp_packet_t pkt)
+{
+	size_t seglen;
+	return odp_packet_map(pkt, &seglen);
+}
+
+odp_buffer_pool_t odp_packet_pool(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->buf_hdr.pool_hdl;
+}
+
+odp_packet_segment_t odp_packet_segment_by_index(odp_packet_t pkt,
+						 size_t ndx)
+{
+	return (odp_packet_segment_t)
+		buffer_segment(&odp_packet_hdr(pkt)->buf_hdr, ndx);
+}
+
+odp_packet_segment_t odp_packet_segment_next(odp_packet_t pkt,
+					     odp_packet_segment_t seg)
+{
+	return (odp_packet_segment_t)
+		segment_next(&odp_packet_hdr(pkt)->buf_hdr, seg);
+}
+
+void *odp_packet_segment_map(odp_packet_t pkt, odp_packet_segment_t seg,
+			     size_t *seglen)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	return segment_map(&pkt_hdr->buf_hdr, seg,
+			   seglen, pkt_hdr->frame_len, pkt_hdr->headroom);
+}
+
+void odp_packet_segment_unmap(odp_packet_segment_t seg ODP_UNUSED)
+{
+}
+
+void *odp_packet_udata(odp_packet_t pkt, size_t *len)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	*len = pkt_hdr->buf_hdr.udata_size;
+	return pkt_hdr->buf_hdr.udata_addr;
+}
+
+void *odp_packet_udata_addr(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->buf_hdr.udata_addr;
+}
+
+void *odp_packet_l2_map(odp_packet_t pkt, size_t *seglen)
+{
+	return odp_packet_offset_map(pkt, odp_packet_l2_offset(pkt), seglen);
+}
+
+size_t odp_packet_l2_offset(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->l2_offset;
+}
+
+int odp_packet_set_l2_offset(odp_packet_t pkt, size_t offset)
+{
+	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
+
+	if (offset >= hdr->frame_len)
+		return -1;
+
+	hdr->l2_offset = offset;
+	return 0;
+}
+
+void *odp_packet_l3_map(odp_packet_t pkt, size_t *seglen)
+{
+	return odp_packet_offset_map(pkt, odp_packet_l3_offset(pkt), seglen);
+}
+
+size_t odp_packet_l3_offset(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->l3_offset;
+}
+
+int odp_packet_set_l3_offset(odp_packet_t pkt, size_t offset)
+{
+	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
+
+	if (offset >= hdr->frame_len)
+		return -1;
+
+	hdr->l3_offset = offset;
+	return 0;
+}
+
+uint32_t odp_packet_l3_protocol(odp_packet_t pkt)
+{
+	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
+
+	if (hdr->input_flags.l3)
+		return hdr->l3_protocol;
+	else
+		return -1;
+}
+
+void odp_packet_set_l3_protocol(odp_packet_t pkt, uint16_t protocol)
+{
+	odp_packet_hdr(pkt)->l3_protocol = protocol;
+}
+
+void *odp_packet_l4_map(odp_packet_t pkt, size_t *seglen)
+{
+	return odp_packet_offset_map(pkt, odp_packet_l4_offset(pkt), seglen);
+}
+
+size_t odp_packet_l4_offset(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->l4_offset;
+}
+
+int odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset)
+{
+	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
+
+	if (offset >= hdr->frame_len)
+		return -1;
+
+	hdr->l4_offset = offset;
+	return 0;
+}
+
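Since packets may now span multiple segments, callers that want raw access
should iterate with the segment accessors rather than assume contiguous
data. A sketch:

    /* Sketch: visit every segment of a packet; the per-segment lengths
     * reported by odp_packet_segment_map() are expected to sum to the
     * frame length for a valid packet. */
    static size_t sum_segment_bytes(odp_packet_t pkt)
    {
            size_t seglen, total = 0;
            int i, nsegs = odp_packet_segment_count(pkt);

            for (i = 0; i < nsegs; i++) {
                    odp_packet_segment_t seg =
                            odp_packet_segment_by_index(pkt, i);

                    if (odp_packet_segment_map(pkt, seg, &seglen) != NULL)
                            total += seglen;
            }

            return total;
    }
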
+uint32_t odp_packet_l4_protocol(odp_packet_t pkt)
+{
+	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
+
+	if (hdr->input_flags.l4)
+		return hdr->l4_protocol;
+	else
+		return -1;
+}
+
+void odp_packet_set_l4_protocol(odp_packet_t pkt, uint8_t protocol)
+{
+	odp_packet_hdr(pkt)->l4_protocol = protocol;
+}
+
+void *odp_packet_payload_map(odp_packet_t pkt, size_t *seglen)
+{
+	return odp_packet_offset_map(pkt, odp_packet_payload_offset(pkt),
+				     seglen);
+}
+
+size_t odp_packet_payload_offset(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->payload_offset;
+}
+
+int odp_packet_set_payload_offset(odp_packet_t pkt, size_t offset)
+{
+	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
+
+	if (offset >= hdr->frame_len)
+		return -1;
+
+	hdr->payload_offset = offset;
+	return 0;
+}
+
+int odp_packet_error(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->error_flags.all != 0;
+}
+
+void odp_packet_set_error(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->error_flags.app_error = val;
+}
+
+int odp_packet_inflag_l2(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.l2;
+}
+
+void odp_packet_set_inflag_l2(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.l2 = val;
+}
+
+int odp_packet_inflag_l3(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.l3;
+}
+
+void odp_packet_set_inflag_l3(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.l3 = val;
+}
+
+int odp_packet_inflag_l4(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.l4;
+}
+
+void odp_packet_set_inflag_l4(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.l4 = val;
+}
 
-	start = (uint8_t *)pkt_hdr + start_offset;
-	len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset;
-	memset(start, 0, len);
+int odp_packet_inflag_eth(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.eth;
+}
+
+void odp_packet_set_inflag_eth(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.eth = val;
+}
+
+int odp_packet_inflag_jumbo(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.jumbo;
+}
+
+void odp_packet_set_inflag_jumbo(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.jumbo = val;
+}
+
+int odp_packet_inflag_vlan(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.vlan;
+}
+
+void odp_packet_set_inflag_vlan(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.vlan = val;
+}
+
+int odp_packet_inflag_vlan_qinq(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.vlan_qinq;
+}
+
+void odp_packet_set_inflag_vlan_qinq(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.vlan_qinq = val;
+}
+
+int odp_packet_inflag_snap(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.snap;
+}
+
+void odp_packet_set_inflag_snap(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.snap = val;
+}
+
+int odp_packet_inflag_arp(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.arp;
+}
+
+void odp_packet_set_inflag_arp(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.arp = val;
+}
+
+int odp_packet_inflag_ipv4(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.ipv4;
+}
+
+void odp_packet_set_inflag_ipv4(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.ipv4 = val;
+}
+
+int odp_packet_inflag_ipv6(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.ipv6;
+}
+
+void odp_packet_set_inflag_ipv6(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.ipv6 = val;
+}
+
+int odp_packet_inflag_ipfrag(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.ipfrag;
+}
+
+void odp_packet_set_inflag_ipfrag(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.ipfrag = val;
+}
+
+int odp_packet_inflag_ipopt(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.ipopt;
+}
+
+void odp_packet_set_inflag_ipopt(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.ipopt = val;
+}
+
+int odp_packet_inflag_ipsec(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.ipsec;
+}
+
+void odp_packet_set_inflag_ipsec(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.ipsec = val;
+}
+
+int odp_packet_inflag_udp(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.udp;
+}
+
+void odp_packet_set_inflag_udp(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.udp = val;
+}
+
+int odp_packet_inflag_tcp(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.tcp;
+}
+
+void odp_packet_set_inflag_tcp(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.tcp = val;
+}
+
+int odp_packet_inflag_tcpopt(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.tcpopt;
+}
+
+void odp_packet_set_inflag_tcpopt(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.tcpopt = val;
+}
+
+int odp_packet_inflag_icmp(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->input_flags.icmp;
+}
+
+void odp_packet_set_inflag_icmp(odp_packet_t pkt, int val)
+{
+	odp_packet_hdr(pkt)->input_flags.icmp = val;
+}
+
+int odp_packet_is_valid(odp_packet_t pkt)
+{
+	odp_buffer_hdr_t *buf = validate_buf((odp_buffer_t)pkt);
+
+	if (buf == NULL)
+		return 0;
+	else
+		return buf->type == ODP_BUFFER_TYPE_PACKET;
+}
+
+int odp_packet_is_segmented(odp_packet_t pkt)
+{
+	return (odp_packet_hdr(pkt)->buf_hdr.segcount > 1);
+}
+
+int odp_packet_segment_count(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->buf_hdr.segcount;
+}
+
+size_t odp_packet_headroom(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->headroom;
+}
+
+size_t odp_packet_tailroom(odp_packet_t pkt)
+{
+	return odp_packet_hdr(pkt)->tailroom;
+}
+
+int odp_packet_push_head(odp_packet_t pkt, size_t len)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	if (len > pkt_hdr->headroom)
+		return -1;
+
+	push_head(pkt_hdr, len);
+	return 0;
+}
+
+void *odp_packet_push_head_and_map(odp_packet_t pkt, size_t len, size_t *seglen)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	if (len > pkt_hdr->headroom)
+		return NULL;
+
+	push_head(pkt_hdr, len);
+
+	return buffer_map(&pkt_hdr->buf_hdr, 0, seglen, pkt_hdr->frame_len);
+}
+
+int odp_packet_pull_head(odp_packet_t pkt, size_t len)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	if (len > pkt_hdr->frame_len)
+		return -1;
+
+	pull_head(pkt_hdr, len);
+	return 0;
+}
+
+void *odp_packet_pull_head_and_map(odp_packet_t pkt, size_t len, size_t *seglen)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	if (len > pkt_hdr->frame_len)
+		return NULL;
+
+	pull_head(pkt_hdr, len);
+	return buffer_map(&pkt_hdr->buf_hdr, 0, seglen, pkt_hdr->frame_len);
+}
+
+int odp_packet_push_tail(odp_packet_t pkt, size_t len)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	if (len > pkt_hdr->tailroom)
+		return -1;
+
+	push_tail(pkt_hdr, len);
+	return 0;
+}
+
+void *odp_packet_push_tail_and_map(odp_packet_t pkt, size_t len, size_t *seglen)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+	size_t origin = pkt_hdr->frame_len;
+
+	if (len > pkt_hdr->tailroom)
+		return NULL;
+
+	push_tail(pkt_hdr, len);
+
+	return buffer_map(&pkt_hdr->buf_hdr, origin, seglen,
+			  pkt_hdr->frame_len);
+}
+
+int odp_packet_pull_tail(odp_packet_t pkt, size_t len)
+{
+	odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
+
+	if (len > pkt_hdr->frame_len)
+		return -1;
+
+	pull_tail(pkt_hdr, len);
+	return 0;
+}
+
+void odp_packet_print(odp_packet_t pkt)
+{
+	int max_len = 512;
+	char str[max_len];
+	int len = 0;
+	int n = max_len-1;
+	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
+
+	len += snprintf(&str[len], n-len, "Packet ");
+	len += odp_buffer_snprint(&str[len], n-len, (odp_buffer_t) pkt);
+	len += snprintf(&str[len], n-len,
+			"  input_flags    0x%x\n", hdr->input_flags.all);
+	len += snprintf(&str[len], n-len,
+			"  error_flags    0x%x\n", hdr->error_flags.all);
+	len += snprintf(&str[len], n-len,
+			"  output_flags   0x%x\n", hdr->output_flags.all);
+	len += snprintf(&str[len], n-len,
+			"  l2_offset      %u\n", hdr->l2_offset);
+	len += snprintf(&str[len], n-len,
+			"  l3_offset      %u\n", hdr->l3_offset);
+	len += snprintf(&str[len], n-len,
+			"  l3_len         %u\n", hdr->l3_len);
+	len += snprintf(&str[len], n-len,
+			"  l3_protocol    0x%x\n", hdr->l3_protocol);
+	len += snprintf(&str[len], n-len,
+			"  l4_offset      %u\n", hdr->l4_offset);
+	len += snprintf(&str[len], n-len,
+			"  l4_len         %u\n", hdr->l4_len);
+	len += snprintf(&str[len], n-len,
+			"  l4_protocol    %u\n", hdr->l4_protocol);
+	len += snprintf(&str[len], n-len,
+			"  payload_offset %u\n", hdr->payload_offset);
+	len += snprintf(&str[len], n-len,
+			"  frame_len      %u\n", hdr->frame_len);
+	str[len] = '\0';
 
-	pkt_hdr->l2_offset = ODP_PACKET_OFFSET_INVALID;
-	pkt_hdr->l3_offset = ODP_PACKET_OFFSET_INVALID;
-	pkt_hdr->l4_offset = ODP_PACKET_OFFSET_INVALID;
+	ODP_LOG(ODP_LOG_DBG, "\n%s\n", str);
 }
 
-odp_packet_t odp_packet_from_buffer(odp_buffer_t buf)
+int odp_packet_copy_to_packet(odp_packet_t dstpkt, size_t dstoffset,
+			      odp_packet_t srcpkt, size_t srcoffset,
+			      size_t len)
 {
-	return (odp_packet_t)buf;
-}
+	void *dstmap;
+	void *srcmap;
+	size_t cpylen, minseg, dstseglen, srcseglen;
+
+	while (len > 0) {
+		dstmap = odp_packet_offset_map(dstpkt, dstoffset, &dstseglen);
+		srcmap = odp_packet_offset_map(srcpkt, srcoffset, &srcseglen);
+		if (dstmap == NULL || srcmap == NULL)
+			return -1;
+		minseg = dstseglen > srcseglen ? srcseglen : dstseglen;
+		cpylen = len > minseg ? minseg : len;
+		memcpy(dstmap, srcmap, cpylen);
+		srcoffset += cpylen;
+		dstoffset += cpylen;
+		len -= cpylen;
+	}
 
-odp_buffer_t odp_packet_to_buffer(odp_packet_t pkt)
-{
-	return (odp_buffer_t)pkt;
+	return 0;
 }
 
-void odp_packet_set_len(odp_packet_t pkt, size_t len)
+int odp_packet_copy_to_memory(void *dstmem, odp_packet_t srcpkt,
+			      size_t srcoffset, size_t dstlen)
 {
-	odp_packet_hdr(pkt)->frame_len = len;
+	void *mapaddr;
+	size_t seglen, cpylen;
+	uint8_t *dstaddr = (uint8_t *)dstmem;
+
+	while (dstlen > 0) {
+		mapaddr = odp_packet_offset_map(srcpkt, srcoffset, &seglen);
+		if (mapaddr == NULL)
+			return -1;
+		cpylen = dstlen > seglen ? seglen : dstlen;
+		memcpy(dstaddr, mapaddr, cpylen);
+		srcoffset += cpylen;
+		dstaddr += cpylen;
+		dstlen -= cpylen;
+	}
+
+	return 0;
 }
 
-size_t odp_packet_get_len(odp_packet_t pkt)
+int odp_packet_copy_from_memory(odp_packet_t dstpkt, size_t dstoffset,
+				void *srcmem, size_t srclen)
 {
-	return odp_packet_hdr(pkt)->frame_len;
+	void *mapaddr;
+	size_t seglen, cpylen;
+	uint8_t *srcaddr = (uint8_t *)srcmem;
+
+	while (srclen > 0) {
+		mapaddr = odp_packet_offset_map(dstpkt, dstoffset, &seglen);
+		if (mapaddr == NULL)
+			return -1;
+		cpylen = srclen > seglen ? seglen : srclen;
+		memcpy(mapaddr, srcaddr, cpylen);
+		dstoffset += cpylen;
+		srcaddr += cpylen;
+		srclen -= cpylen;
+	}
+
+	return 0;
 }
 
-uint8_t *odp_packet_addr(odp_packet_t pkt)
+odp_packet_t odp_packet_copy(odp_packet_t pkt, odp_buffer_pool_t pool)
 {
-	return odp_buffer_addr(odp_packet_to_buffer(pkt));
+	size_t pktlen = odp_packet_len(pkt);
+	const size_t meta_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr);
+	odp_packet_t newpkt = odp_packet_alloc_len(pool, pktlen);
+	odp_packet_hdr_t *newhdr, *srchdr;
+	uint8_t *newstart, *srcstart;
+
+	if (newpkt != ODP_PACKET_INVALID) {
+		/* Must copy meta data first, followed by packet data */
+		srchdr = odp_packet_hdr(pkt);
+		newhdr = odp_packet_hdr(newpkt);
+		newstart = (uint8_t *)newhdr + meta_offset;
+		srcstart = (uint8_t *)srchdr + meta_offset;
+
+		memcpy(newstart, srcstart,
+		       sizeof(odp_packet_hdr_t) - meta_offset);
+
+		if (odp_packet_copy_to_packet(newpkt, 0, pkt, 0, pktlen) != 0) {
+			odp_packet_free(newpkt);
+			newpkt = ODP_PACKET_INVALID;
+		}
+	}
+
+	return newpkt;
 }
 
-uint8_t *odp_packet_data(odp_packet_t pkt)
+odp_packet_t odp_packet_clone(odp_packet_t pkt)
 {
-	return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->frame_offset;
+	return odp_packet_copy(pkt, odp_packet_hdr(pkt)->buf_hdr.pool_hdl);
 }
 
-
-uint8_t *odp_packet_l2(odp_packet_t pkt)
+odp_packet_t odp_packet_alloc(odp_buffer_pool_t pool_hdl)
 {
-	const size_t offset = odp_packet_l2_offset(pkt);
+	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	if (odp_unlikely(offset == ODP_PACKET_OFFSET_INVALID))
-		return NULL;
+	if (pool->s.params.buf_type != ODP_BUFFER_TYPE_PACKET)
+		return ODP_PACKET_INVALID;
 
-	return odp_packet_addr(pkt) + offset;
+	return (odp_packet_t)buffer_alloc(pool_hdl,
+					  pool->s.params.buf_size);
 }
 
-size_t odp_packet_l2_offset(odp_packet_t pkt)
+odp_packet_t odp_packet_alloc_len(odp_buffer_pool_t pool, size_t len)
 {
-	return odp_packet_hdr(pkt)->l2_offset;
+	if (odp_pool_to_entry(pool)->s.params.buf_type !=
+	    ODP_BUFFER_TYPE_PACKET)
+		return ODP_PACKET_INVALID;
+
+	return (odp_packet_t)buffer_alloc(pool, len);
 }
 
-void odp_packet_set_l2_offset(odp_packet_t pkt, size_t offset)
+void odp_packet_free(odp_packet_t pkt)
 {
-	odp_packet_hdr(pkt)->l2_offset = offset;
+	odp_buffer_free((odp_buffer_t)pkt);
 }
 
-uint8_t *odp_packet_l3(odp_packet_t pkt)
+odp_packet_t odp_packet_split(odp_packet_t pkt, size_t offset,
+			      size_t hr, size_t tr)
 {
-	const size_t offset = odp_packet_l3_offset(pkt);
+	size_t len = odp_packet_len(pkt);
+	odp_buffer_pool_t pool = odp_packet_pool(pkt);
+	size_t pool_hr = odp_buffer_pool_headroom(pool);
+	size_t pkt_tr = odp_packet_tailroom(pkt);
+	odp_packet_t splitpkt;
+	size_t splitlen = len - offset;
 
-	if (odp_unlikely(offset == ODP_PACKET_OFFSET_INVALID))
-		return NULL;
+	if (offset >= len)
+		return ODP_PACKET_INVALID;
 
-	return odp_packet_addr(pkt) + offset;
-}
+	/* Erratum: We don't handle this rare corner case */
+	if (tr > pkt_tr + splitlen)
+		return ODP_PACKET_INVALID;
 
-size_t odp_packet_l3_offset(odp_packet_t pkt)
-{
-	return odp_packet_hdr(pkt)->l3_offset;
-}
+	if (hr > pool_hr)
+		splitlen += (hr - pool_hr);
 
-void odp_packet_set_l3_offset(odp_packet_t pkt, size_t offset)
-{
-	odp_packet_hdr(pkt)->l3_offset = offset;
+	splitpkt = odp_packet_alloc_len(pool, splitlen);
+
+	if (splitpkt != ODP_PACKET_INVALID) {
+		if (hr > pool_hr) {
+			odp_packet_pull_head(splitpkt, hr - pool_hr);
+			splitlen -= (hr - pool_hr);
+		}
+		odp_packet_copy_to_packet(splitpkt, 0,
+					  pkt, offset, splitlen);
+		odp_packet_pull_tail(pkt, splitlen);
+	}
+
+	return splitpkt;
 }
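A sketch of pairing the new split/join operations; note that
odp_packet_join() below consumes its inputs when it succeeds, so callers
must not reference them afterwards (the offset value is illustrative):

    /* Sketch: cut a packet in two at 'offset', then reassemble it.
     * hr/tr of 0 accept the pool's default headroom and tailroom. */
    static odp_packet_t split_and_rejoin(odp_packet_t pkt, size_t offset)
    {
            odp_packet_t tail = odp_packet_split(pkt, offset, 0, 0);

            if (tail == ODP_PACKET_INVALID)
                    return pkt; /* split rejected; pkt is unchanged */

            return odp_packet_join(pkt, tail);
    }
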
-uint8_t *odp_packet_l4(odp_packet_t pkt)
+odp_packet_t odp_packet_join(odp_packet_t pkt1, odp_packet_t pkt2)
 {
-	const size_t offset = odp_packet_l4_offset(pkt);
+	size_t len1 = odp_packet_len(pkt1);
+	size_t len2 = odp_packet_len(pkt2);
+	odp_packet_t joinpkt;
+	void *udata1, *udata_join;
+	size_t udata_size1, udata_size_join;
+
+	/* Optimize if pkt1 is big enough to hold pkt2 */
+	if (odp_packet_push_tail(pkt1, len2) == 0) {
+		odp_packet_copy_to_packet(pkt1, len1,
+					  pkt2, 0, len2);
+		odp_packet_free(pkt2);
+		return pkt1;
+	}
 
-	if (odp_unlikely(offset == ODP_PACKET_OFFSET_INVALID))
-		return NULL;
+	/* Otherwise join into a new packet */
+	joinpkt = odp_packet_alloc_len(odp_packet_pool(pkt1),
+				       len1 + len2);
 
-	return odp_packet_addr(pkt) + offset;
+	if (joinpkt != ODP_PACKET_INVALID) {
+		odp_packet_copy_to_packet(joinpkt, 0, pkt1, 0, len1);
+		odp_packet_copy_to_packet(joinpkt, len1, pkt2, 0, len2);
+
+		/* Copy user metadata if present */
+		udata1 = odp_packet_udata(pkt1, &udata_size1);
+		udata_join = odp_packet_udata(joinpkt, &udata_size_join);
+
+		if (udata1 != NULL && udata_join != NULL)
+			memcpy(udata_join, udata1,
+			       udata_size_join > udata_size1 ?
+			       udata_size1 : udata_size_join);
+
+		odp_packet_free(pkt1);
+		odp_packet_free(pkt2);
+	}
+
+	return joinpkt;
 }
 
-size_t odp_packet_l4_offset(odp_packet_t pkt)
+uint32_t odp_packet_refcount(odp_packet_t pkt)
 {
-	return odp_packet_hdr(pkt)->l4_offset;
+	return odp_buffer_refcount(&odp_packet_hdr(pkt)->buf_hdr);
 }
 
-void odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset)
+uint32_t odp_packet_incr_refcount(odp_packet_t pkt, uint32_t val)
 {
-	odp_packet_hdr(pkt)->l4_offset = offset;
+	return odp_buffer_incr_refcount(&odp_packet_hdr(pkt)->buf_hdr, val);
 }
 
+uint32_t odp_packet_decr_refcount(odp_packet_t pkt, uint32_t val)
+{
+	return odp_buffer_decr_refcount(&odp_packet_hdr(pkt)->buf_hdr, val);
+}
 
-int odp_packet_is_segmented(odp_packet_t pkt)
+/**
+ * Parser helper function for IPv4
+ */
+static inline uint8_t parse_ipv4(odp_packet_hdr_t *pkt_hdr,
+				 uint8_t **parseptr, size_t *offset)
 {
-	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt);
+	odph_ipv4hdr_t *ipv4 = (odph_ipv4hdr_t *)*parseptr;
+	uint8_t ver = ODPH_IPV4HDR_VER(ipv4->ver_ihl);
+	uint8_t ihl = ODPH_IPV4HDR_IHL(ipv4->ver_ihl);
+	uint16_t frag_offset;
 
-	if (buf_hdr->scatter.num_bufs == 0)
+	pkt_hdr->l3_len = odp_be_to_cpu_16(ipv4->tot_len);
+
+	if (odp_unlikely(ihl < ODPH_IPV4HDR_IHL_MIN) ||
+	    odp_unlikely(ver != 4) ||
+	    (pkt_hdr->l3_len > pkt_hdr->frame_len - *offset)) {
+		pkt_hdr->error_flags.ip_err = 1;
 		return 0;
-	else
-		return 1;
+	}
+
+	*offset += ihl * 4;
+	*parseptr += ihl * 4;
+
+	if (odp_unlikely(ihl > ODPH_IPV4HDR_IHL_MIN))
+		pkt_hdr->input_flags.ipopt = 1;
+
+	/* A packet is a fragment if:
+	 * "more fragments" flag is set (all fragments except the last)
+	 * OR
+	 * "fragment offset" field is nonzero (all fragments except the first)
+	 */
+	frag_offset = odp_be_to_cpu_16(ipv4->frag_offset);
+	if (odp_unlikely(ODPH_IPV4HDR_IS_FRAGMENT(frag_offset)))
+		pkt_hdr->input_flags.ipfrag = 1;
+
+	if (ipv4->proto == ODPH_IPPROTO_ESP ||
+	    ipv4->proto == ODPH_IPPROTO_AH) {
+		pkt_hdr->input_flags.ipsec = 1;
+		return 0;
+	}
+
+	return ipv4->proto;
 }
 
+/**
+ * Parser helper function for IPv6
+ */
+static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr,
+				 uint8_t **parseptr, size_t *offset)
+{
+	odph_ipv6hdr_t *ipv6 = (odph_ipv6hdr_t *)*parseptr;
+	odph_ipv6hdr_ext_t *ipv6ext;
+
+	pkt_hdr->l3_len = odp_be_to_cpu_16(ipv6->payload_len);
+
+	/* Basic sanity checks on IPv6 header */
+	if (ipv6->ver != 6 ||
+	    pkt_hdr->l3_len > pkt_hdr->frame_len - *offset) {
+		pkt_hdr->error_flags.ip_err = 1;
+		return 0;
+	}
+
+	/* Skip past IPv6 header */
+	*offset += sizeof(odph_ipv6hdr_t);
+	*parseptr += sizeof(odph_ipv6hdr_t);
+
+
+	/* Skip past any IPv6 extension headers */
+	if (ipv6->next_hdr == ODPH_IPPROTO_HOPOPTS ||
+	    ipv6->next_hdr == ODPH_IPPROTO_ROUTE) {
+		pkt_hdr->input_flags.ipopt = 1;
+
+		do {
+			ipv6ext = (odph_ipv6hdr_ext_t *)*parseptr;
+			uint16_t extlen = 8 + ipv6ext->ext_len * 8;
+
+			*offset += extlen;
+			*parseptr += extlen;
+		} while ((ipv6ext->next_hdr == ODPH_IPPROTO_HOPOPTS ||
+			  ipv6ext->next_hdr == ODPH_IPPROTO_ROUTE) &&
+			 *offset < pkt_hdr->frame_len);
+
+		if (*offset >= pkt_hdr->l3_offset + pkt_hdr->l3_len) {
+			pkt_hdr->error_flags.ip_err = 1;
+			return 0;
+		}
+
+		if (ipv6ext->next_hdr == ODPH_IPPROTO_FRAG)
+			pkt_hdr->input_flags.ipfrag = 1;
+
+		return ipv6ext->next_hdr;
+	}
+
+	if (odp_unlikely(ipv6->next_hdr == ODPH_IPPROTO_FRAG)) {
+		pkt_hdr->input_flags.ipopt = 1;
+		pkt_hdr->input_flags.ipfrag = 1;
+	}
+
+	return ipv6->next_hdr;
+}
 
-int odp_packet_seg_count(odp_packet_t pkt)
+/**
+ * Parser helper function for TCP
+ */
+static inline void parse_tcp(odp_packet_hdr_t *pkt_hdr,
+			     uint8_t **parseptr, size_t *offset)
 {
-	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt);
+	odph_tcphdr_t *tcp = (odph_tcphdr_t *)*parseptr;
+
+	if (tcp->hl < sizeof(odph_tcphdr_t)/sizeof(uint32_t))
+		pkt_hdr->error_flags.tcp_err = 1;
+	else if ((uint32_t)tcp->hl * 4 > sizeof(odph_tcphdr_t))
+		pkt_hdr->input_flags.tcpopt = 1;
 
-	return (int)buf_hdr->scatter.num_bufs + 1;
+	pkt_hdr->l4_len = pkt_hdr->l3_len +
+		pkt_hdr->l3_offset - pkt_hdr->l4_offset;
+
+	*offset += sizeof(odph_tcphdr_t);
+	*parseptr += sizeof(odph_tcphdr_t);
 }
 
+/**
+ * Parser helper function for UDP
+ */
+static inline void parse_udp(odp_packet_hdr_t *pkt_hdr,
+			     uint8_t **parseptr, size_t *offset)
+{
+	odph_udphdr_t *udp = (odph_udphdr_t *)*parseptr;
+	uint32_t udplen = odp_be_to_cpu_16(udp->length);
+
+	if (udplen < sizeof(odph_udphdr_t) ||
+	    udplen > (pkt_hdr->l3_len +
+		      pkt_hdr->l3_offset - pkt_hdr->l4_offset)) {
+		pkt_hdr->error_flags.udp_err = 1;
+	}
+
+	pkt_hdr->l4_len = udplen;
+
+	*offset += sizeof(odph_udphdr_t);
+	*parseptr += sizeof(odph_udphdr_t);
+}
 
 /**
  * Simple packet parser: eth, VLAN, IP, TCP/UDP/ICMP
@@ -154,245 +940,151 @@ int odp_packet_seg_count(odp_packet_t pkt)
  * , lengths and offsets (usually done&called in packet input).
 *
 * @param pkt        Packet handle
- * @param len        Packet length in bytes
- * @param frame_offset  Byte offset to L2 header
 */
-void odp_packet_parse(odp_packet_t pkt, size_t len, size_t frame_offset)
+int odp_packet_parse(odp_packet_t pkt)
 {
 	odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt);
 	odph_ethhdr_t *eth;
 	odph_vlanhdr_t *vlan;
-	odph_ipv4hdr_t *ipv4;
-	odph_ipv6hdr_t *ipv6;
 	uint16_t ethtype;
-	size_t offset = 0;
+	uint8_t *parseptr;
+	size_t offset, seglen;
 	uint8_t ip_proto = 0;
 
+	/* Reset parser metadata for new parse */
+	pkt_hdr->error_flags.all = 0;
+	pkt_hdr->input_flags.all = 0;
+	pkt_hdr->output_flags.all = 0;
+	pkt_hdr->l2_offset = 0;
+	pkt_hdr->l3_offset = 0;
+	pkt_hdr->l4_offset = 0;
+	pkt_hdr->payload_offset = 0;
+	pkt_hdr->vlan_s_tag = 0;
+	pkt_hdr->vlan_c_tag = 0;
+	pkt_hdr->l3_protocol = 0;
+	pkt_hdr->l4_protocol = 0;
+
+	/* We only support Ethernet for now */
 	pkt_hdr->input_flags.eth = 1;
-	pkt_hdr->frame_offset = frame_offset;
-	pkt_hdr->frame_len = len;
 
-	if (odp_unlikely(len < ODPH_ETH_LEN_MIN)) {
+	if (odp_unlikely(pkt_hdr->frame_len < ODPH_ETH_LEN_MIN)) {
 		pkt_hdr->error_flags.frame_len = 1;
-		return;
-	} else if (len > ODPH_ETH_LEN_MAX) {
+		goto parse_exit;
+	} else if (pkt_hdr->frame_len > ODPH_ETH_LEN_MAX) {
 		pkt_hdr->input_flags.jumbo = 1;
 	}
 
 	/* Assume valid L2 header, no CRC/FCS check in SW */
 	pkt_hdr->input_flags.l2 = 1;
-	pkt_hdr->l2_offset = frame_offset;
 
-	eth = (odph_ethhdr_t *)odp_packet_data(pkt);
-	ethtype = odp_be_to_cpu_16(eth->type);
-	vlan = (odph_vlanhdr_t *)&eth->type;
+	eth = (odph_ethhdr_t *)odp_packet_map(pkt, &seglen);
+	offset = sizeof(odph_ethhdr_t);
+	parseptr = (uint8_t *)&eth->type;
+	ethtype = odp_be_to_cpu_16(*((uint16_t *)(void *)parseptr));
 
+	/* Parse the VLAN header(s), if present */
 	if (ethtype == ODPH_ETHTYPE_VLAN_OUTER) {
 		pkt_hdr->input_flags.vlan_qinq = 1;
-		ethtype = odp_be_to_cpu_16(vlan->tpid);
+		pkt_hdr->input_flags.vlan = 1;
+		vlan = (odph_vlanhdr_t *)(void *)parseptr;
+		pkt_hdr->vlan_s_tag = ((ethtype << 16) |
+				       odp_be_to_cpu_16(vlan->tci));
 		offset += sizeof(odph_vlanhdr_t);
-		vlan = &vlan[1];
+		parseptr += sizeof(odph_vlanhdr_t);
+		ethtype = odp_be_to_cpu_16(*((uint16_t *)(void *)parseptr));
 	}
 
 	if (ethtype == ODPH_ETHTYPE_VLAN) {
 		pkt_hdr->input_flags.vlan = 1;
-		ethtype = odp_be_to_cpu_16(vlan->tpid);
+		vlan = (odph_vlanhdr_t *)(void *)parseptr;
+		pkt_hdr->vlan_c_tag = ((ethtype << 16) |
+				       odp_be_to_cpu_16(vlan->tci));
 		offset += sizeof(odph_vlanhdr_t);
+		parseptr += sizeof(odph_vlanhdr_t);
+		ethtype = odp_be_to_cpu_16(*((uint16_t *)(void *)parseptr));
 	}
 
+	/* Check for SNAP vs. DIX */
+	if (ethtype < ODPH_ETH_LEN_MAX) {
+		pkt_hdr->input_flags.snap = 1;
+		if (ethtype > pkt_hdr->frame_len - offset) {
+			pkt_hdr->error_flags.snap_len = 1;
+			goto parse_exit;
+		}
+		offset   += 8;
+		parseptr += 8;
+		ethtype = odp_be_to_cpu_16(*((uint16_t *)(void *)parseptr));
+	}
+
+	/* Consume Ethertype for Layer 3 parse */
+	parseptr += 2;
 
+	/* Set l3_offset+flag only for known ethtypes */
+	pkt_hdr->input_flags.l3 = 1;
+	pkt_hdr->l3_offset = offset;
+	pkt_hdr->l3_protocol = ethtype;
+
+	/* Parse Layer 3 headers */
 	switch (ethtype) {
 	case ODPH_ETHTYPE_IPV4:
 		pkt_hdr->input_flags.ipv4 = 1;
-		pkt_hdr->input_flags.l3 = 1;
-		pkt_hdr->l3_offset = frame_offset + ODPH_ETHHDR_LEN + offset;
-		ipv4 = (odph_ipv4hdr_t *)odp_packet_l3(pkt);
-		ip_proto = parse_ipv4(pkt_hdr, ipv4, &offset);
+		ip_proto = parse_ipv4(pkt_hdr, &parseptr, &offset);
 		break;
+
 	case ODPH_ETHTYPE_IPV6:
 		pkt_hdr->input_flags.ipv6 = 1;
-		pkt_hdr->input_flags.l3 = 1;
-		pkt_hdr->l3_offset = frame_offset + ODPH_ETHHDR_LEN + offset;
-		ipv6 = (odph_ipv6hdr_t *)odp_packet_l3(pkt);
-		ip_proto = parse_ipv6(pkt_hdr, ipv6, &offset);
+		ip_proto = parse_ipv6(pkt_hdr, &parseptr, &offset);
 		break;
+
 	case ODPH_ETHTYPE_ARP:
 		pkt_hdr->input_flags.arp = 1;
-		/* fall through */
-	default:
-		ip_proto = 0;
+		ip_proto = 255;  /* Reserved invalid by IANA */
 		break;
+
+	default:
+		pkt_hdr->input_flags.l3 = 0;
+		ip_proto = 255;  /* Reserved invalid by IANA */
 	}
 
+	/* Set l4_offset+flag only for known ip_proto */
+	pkt_hdr->input_flags.l4 = 1;
+	pkt_hdr->l4_offset = offset;
+	pkt_hdr->l4_protocol = ip_proto;
+
+	/* Parse Layer 4 headers */
 	switch (ip_proto) {
-	case ODPH_IPPROTO_UDP:
-		pkt_hdr->input_flags.udp = 1;
-		pkt_hdr->input_flags.l4 = 1;
-		pkt_hdr->l4_offset = pkt_hdr->l3_offset + offset;
+	case ODPH_IPPROTO_ICMP:
+		pkt_hdr->input_flags.icmp = 1;
 		break;
+
 	case ODPH_IPPROTO_TCP:
 		pkt_hdr->input_flags.tcp = 1;
-		pkt_hdr->input_flags.l4 = 1;
-		pkt_hdr->l4_offset = pkt_hdr->l3_offset + offset;
+		parse_tcp(pkt_hdr, &parseptr, &offset);
 		break;
-	case ODPH_IPPROTO_SCTP:
-		pkt_hdr->input_flags.sctp = 1;
-		pkt_hdr->input_flags.l4 = 1;
-		pkt_hdr->l4_offset = pkt_hdr->l3_offset + offset;
+
+	case ODPH_IPPROTO_UDP:
+		pkt_hdr->input_flags.udp = 1;
+		parse_udp(pkt_hdr, &parseptr, &offset);
 		break;
-	case ODPH_IPPROTO_ICMP:
-		pkt_hdr->input_flags.icmp = 1;
-		pkt_hdr->input_flags.l4 = 1;
-		pkt_hdr->l4_offset = pkt_hdr->l3_offset + offset;
+
+	case ODPH_IPPROTO_AH:
+	case ODPH_IPPROTO_ESP:
+		pkt_hdr->input_flags.ipsec = 1;
 		break;
+
 	default:
-		/* 0 or unhandled IP protocols, don't set L4 flag+offset */
-		if (pkt_hdr->input_flags.ipv6) {
-			/* IPv6 next_hdr is not L4, mark as IP-option instead */
-			pkt_hdr->input_flags.ipopt = 1;
-		}
+		pkt_hdr->input_flags.l4 = 0;
 		break;
 	}
-}
-
-static inline uint8_t parse_ipv4(odp_packet_hdr_t *pkt_hdr,
-				 odph_ipv4hdr_t *ipv4, size_t *offset_out)
-{
-	uint8_t ihl;
-	uint16_t frag_offset;
-
-	ihl = ODPH_IPV4HDR_IHL(ipv4->ver_ihl);
-	if (odp_unlikely(ihl < ODPH_IPV4HDR_IHL_MIN)) {
-		pkt_hdr->error_flags.ip_err = 1;
-		return 0;
-	}
-
-	if (odp_unlikely(ihl > ODPH_IPV4HDR_IHL_MIN)) {
-		pkt_hdr->input_flags.ipopt = 1;
-		return 0;
-	}
 
-	/* A packet is a fragment if:
-	 * "more fragments" flag is set (all fragments except the last)
-	 * OR
-	 * "fragment offset" field is nonzero (all fragments except the first)
+	/*
+	 * Anything beyond what we parse here is considered payload.
+	 * Note: Payload is really only relevant for TCP and UDP.  For
+	 * all other protocols, the payload offset will point to the
+	 * final header (ARP, ICMP, AH, ESP, or IP Fragment).
 	 */
-	frag_offset = odp_be_to_cpu_16(ipv4->frag_offset);
-	if (odp_unlikely(ODPH_IPV4HDR_IS_FRAGMENT(frag_offset))) {
-		pkt_hdr->input_flags.ipfrag = 1;
-		return 0;
-	}
-
-	if (ipv4->proto == ODPH_IPPROTO_ESP ||
-	    ipv4->proto == ODPH_IPPROTO_AH) {
-		pkt_hdr->input_flags.ipsec = 1;
-		return 0;
-	}
-
-	/* Set pkt_hdr->input_flags.ipopt when checking L4 hdrs after return */
-
-	*offset_out = sizeof(uint32_t) * ihl;
-	return ipv4->proto;
-}
-
-static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr,
-				 odph_ipv6hdr_t *ipv6, size_t *offset_out)
-{
-	if (ipv6->next_hdr == ODPH_IPPROTO_ESP ||
-	    ipv6->next_hdr == ODPH_IPPROTO_AH) {
-		pkt_hdr->input_flags.ipopt = 1;
-		pkt_hdr->input_flags.ipsec = 1;
-		return 0;
-	}
-
-	if (odp_unlikely(ipv6->next_hdr == ODPH_IPPROTO_FRAG)) {
-		pkt_hdr->input_flags.ipopt = 1;
-		pkt_hdr->input_flags.ipfrag = 1;
-		return 0;
-	}
-
-	/* Don't step through more extensions */
-	*offset_out = ODPH_IPV6HDR_LEN;
-	return ipv6->next_hdr;
-}
-
-void odp_packet_print(odp_packet_t pkt)
-{
-	int max_len = 512;
-	char str[max_len];
-	int len = 0;
-	int n = max_len-1;
-	odp_packet_hdr_t *hdr = odp_packet_hdr(pkt);
-
-	len += snprintf(&str[len], n-len, "Packet ");
-	len += odp_buffer_snprint(&str[len], n-len, (odp_buffer_t) pkt);
-	len += snprintf(&str[len], n-len,
-			"  input_flags  0x%x\n", hdr->input_flags.all);
-	len += snprintf(&str[len], n-len,
-			"  error_flags  0x%x\n", hdr->error_flags.all);
-	len += snprintf(&str[len], n-len,
-			"  output_flags 0x%x\n", hdr->output_flags.all);
-	len += snprintf(&str[len], n-len,
-			"  frame_offset %u\n", hdr->frame_offset);
-	len += snprintf(&str[len], n-len,
-			"  l2_offset    %u\n", hdr->l2_offset);
-	len += snprintf(&str[len], n-len,
-			"  l3_offset    %u\n", hdr->l3_offset);
-	len += snprintf(&str[len], n-len,
-			"  l4_offset    %u\n", hdr->l4_offset);
-	len += snprintf(&str[len], n-len,
-			"  frame_len    %u\n", hdr->frame_len);
-	len += snprintf(&str[len], n-len,
-			"  input        %u\n", hdr->input);
-	str[len] = '\0';
-
-	printf("\n%s\n", str);
-}
-
-int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src)
-{
-	odp_packet_hdr_t *const pkt_hdr_dst = odp_packet_hdr(pkt_dst);
-	odp_packet_hdr_t *const pkt_hdr_src = odp_packet_hdr(pkt_src);
-	const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr);
-	uint8_t *start_src;
-	uint8_t *start_dst;
-	size_t len;
-
-	if (pkt_dst == ODP_PACKET_INVALID || pkt_src == ODP_PACKET_INVALID)
-		return -1;
-
-	if (pkt_hdr_dst->buf_hdr.size <
-	    pkt_hdr_src->frame_len + pkt_hdr_src->frame_offset)
-		return -1;
-
-	/* Copy packet header */
-	start_dst = (uint8_t *)pkt_hdr_dst + start_offset;
-	start_src = (uint8_t *)pkt_hdr_src + start_offset;
-	len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset;
-	memcpy(start_dst, start_src, len);
-
-	/* Copy frame payload */
-	start_dst = (uint8_t *)odp_packet_data(pkt_dst);
-	start_src = (uint8_t *)odp_packet_data(pkt_src);
-	len = pkt_hdr_src->frame_len;
-	memcpy(start_dst, start_src, len);
-
-	/* Copy useful things from the buffer header */
-	pkt_hdr_dst->buf_hdr.cur_offset = pkt_hdr_src->buf_hdr.cur_offset;
+	pkt_hdr->payload_offset = offset;
 
-	/* Create a copy of the scatter list */
-	odp_buffer_copy_scatter(odp_packet_to_buffer(pkt_dst),
-				odp_packet_to_buffer(pkt_src));
-
-	return 0;
-}
-
-void odp_packet_set_ctx(odp_packet_t pkt, const void *ctx)
-{
-	odp_packet_hdr(pkt)->user_ctx = (intptr_t)ctx;
-}
-
-void *odp_packet_get_ctx(odp_packet_t pkt)
-{
-	return (void *)(intptr_t)odp_packet_hdr(pkt)->user_ctx;
+parse_exit:
+	return pkt_hdr->error_flags.all != 0;
 }
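
A caller-side sketch of the reworked parser: odp_packet_parse() now returns
nonzero when any error flag is set, and the offsets and flags it records are
read back through the accessors added earlier in this patch:

    /* Sketch: parse a received frame and locate its IPv4 header. */
    static void classify(odp_packet_t pkt)
    {
            size_t seglen;

            if (odp_packet_parse(pkt) != 0)
                    return; /* malformed; error_flags tell why */

            if (odp_packet_inflag_ipv4(pkt)) {
                    void *l3 = odp_packet_l3_map(pkt, &seglen);
                    /* l3 addresses the IPv4 header; at least seglen
                     * contiguous bytes are mappable at that offset */
                    (void)l3;
            }
    }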