Message ID: 1417547823-16522-1-git-send-email-bill.fischofer@linaro.org
State: New
prefix this patch with: api: ... On 2014-12-02 13:17, Bill Fischofer wrote: > Restructure ODP buffer pool internals to support new APIs. The comment doesn't add any extra value beyond the short log. "Modifies linux-generic, example and test to make them ready for adding the new odp_buffer_pool_create API" > Implements new odp_buffer_pool_create() API. > > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> > --- > example/generator/odp_generator.c | 19 +- > example/ipsec/odp_ipsec.c | 57 +- > example/l2fwd/odp_l2fwd.c | 19 +- > example/odp_example/odp_example.c | 18 +- > example/packet/odp_pktio.c | 19 +- > example/timer/odp_timer_test.c | 13 +- > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- > platform/linux-generic/include/api/odp_config.h | 10 + > .../linux-generic/include/api/odp_platform_types.h | 9 + Grouping stuff into odp_platform_types.h should be its own patch. > .../linux-generic/include/api/odp_shared_memory.h | 10 +- > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ Creating an inline file should be its own patch. > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- > .../include/odp_buffer_pool_internal.h | 278 ++++++-- > .../linux-generic/include/odp_packet_internal.h | 50 +- > .../linux-generic/include/odp_timer_internal.h | 11 +- > platform/linux-generic/odp_buffer.c | 31 +- > platform/linux-generic/odp_buffer_pool.c | 711 +++++++++------------ > platform/linux-generic/odp_packet.c | 41 +- > platform/linux-generic/odp_queue.c | 1 + > platform/linux-generic/odp_schedule.c | 20 +- > platform/linux-generic/odp_timer.c | 3 +- > test/api_test/odp_timer_ping.c | 19 +- > test/validation/odp_crypto.c | 43 +- > test/validation/odp_queue.c | 19 +- > 24 files changed, 1024 insertions(+), 762 deletions(-) > create mode 100644 platform/linux-generic/include/odp_buffer_inlines.h > [...] > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h b/platform/linux-generic/include/api/odp_buffer_pool.h > index 30b83e0..7022daa 100644 > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > @@ -36,32 +36,101 @@ extern "C" { > #define ODP_BUFFER_POOL_INVALID 0 > > /** > + * Buffer pool parameters > + * Used to communicate buffer pool creation options. > + */ > +typedef struct odp_buffer_pool_param_t { > + size_t buf_size; /**< Buffer size in bytes. The maximum > + number of bytes application will "...bytes the application..." > + store in each buffer. */ > + size_t buf_align; /**< Minimum buffer alignment in bytes. > + Valid values are powers of two. Use 0 > + for default alignment. Default will > + always be a multiple of 8. */ > + uint32_t num_bufs; /**< Number of buffers in the pool */ > + int buf_type; /**< Buffer type */ > +} odp_buffer_pool_param_t; > + > +/** > * Create a buffer pool > + * This routine is used to create a buffer pool. It take three > + * arguments: the optional name of the pool to be created, an optional shared > + * memory handle, and a parameter struct that describes the pool to be > + * created. If a name is not specified the result is an anonymous pool that > + * cannot be referenced by odp_buffer_pool_lookup(). 
> * > - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 chars) > - * @param base_addr Pool base address > - * @param size Pool size in bytes > - * @param buf_size Buffer size in bytes > - * @param buf_align Minimum buffer alignment > - * @param buf_type Buffer type > + * @param[in] name Name of the pool, max ODP_BUFFER_POOL_NAME_LEN-1 chars. > + * May be specified as NULL for anonymous pools. > * > - * @return Buffer pool handle > + * @param[in] shm The shared memory object in which to create the pool. > + * Use ODP_SHM_NULL to reserve default memory type > + * for the buffer type. > + * > + * @param[in] params Buffer pool parameters. > + * > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call failed. Should be @retval Buffer pool handle on success @retval ODP_BUFFER_POOL_INVALID if call failed 1 (if it can fail list the reasons) @retval ODP_BUFFER_POOL_INVALID if call failed 2 (if it can fail list the reasons) @retval ODP_BUFFER_POOL_INVALID if call failed N > */ > + > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > - void *base_addr, uint64_t size, > - size_t buf_size, size_t buf_align, > - int buf_type); > + odp_shm_t shm, > + odp_buffer_pool_param_t *params); > > +/** > + * Destroy a buffer pool previously created by odp_buffer_pool_create() > + * > + * @param[in] pool Handle of the buffer pool to be destroyed > + * > + * @return 0 on Success, -1 on Failure. use @retval here as well and list the reasons how it can fail. > + * > + * @note This routine destroys a previously created buffer pool. This call > + * does not destroy any shared memory object passed to > + * odp_buffer_pool_create() used to store the buffer pool contents. The caller > + * takes responsibility for that. If no shared memory object was passed as > + * part of the create call, then this routine will destroy any internal shared > + * memory objects associated with the buffer pool. Results are undefined if > + * an attempt is made to destroy a buffer pool that contains allocated or > + * otherwise active buffers. > + */ > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); This doesn't belong in this patch, belongs in the odp_buffer_pool_destroy patch. > > /** > * Find a buffer pool by name > * > - * @param name Name of the pool > + * @param[in] name Name of the pool > * > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found. Fix this. > + * > + * @note This routine cannot be used to look up an anonymous pool (one created > + * with no name). How can I delete an anonymous pool? > */ > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > > +/** > + * Buffer pool information struct > + * Used to get information about a buffer pool. > + */ > +typedef struct odp_buffer_pool_info_t { > + const char *name; /**< pool name */ > + odp_buffer_pool_param_t params; /**< pool parameters */ > +} odp_buffer_pool_info_t; > + > +/** > + * Retrieve information about a buffer pool > + * > + * @param[in] pool Buffer pool handle > + * > + * @param[out] shm Recieves odp_shm_t supplied by caller at > + * pool creation, or ODP_SHM_NULL if the > + * pool is managed internally. > + * > + * @param[out] info Receives an odp_buffer_pool_info_t object > + * that describes the pool. > + * > + * @return 0 on success, -1 if info could not be retrieved. Fix > + */ > + > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > + odp_buffer_pool_info_t *info); This doesn't belong in this patch, belongs in the odp_buffer_pool_info patch. 
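For reference, a minimal usage sketch of the create/info/destroy API as declared above; the pool name and sizes here are illustrative values, not taken from the patch, and the ODP headers are assumed to be in scope:

#include <stdio.h>

static void pool_example(void)
{
        odp_buffer_pool_param_t params;
        odp_buffer_pool_info_t info;
        odp_shm_t shm;
        odp_buffer_pool_t pool;

        params.buf_size  = 1856;  /* illustrative sizes only */
        params.buf_align = 0;     /* 0 selects the default alignment */
        params.num_bufs  = 1024;
        params.buf_type  = ODP_BUFFER_TYPE_PACKET;

        /* ODP_SHM_NULL asks the implementation to reserve memory itself */
        pool = odp_buffer_pool_create("pkt_pool", ODP_SHM_NULL, &params);
        if (pool == ODP_BUFFER_POOL_INVALID)
                return;

        if (odp_buffer_pool_info(pool, &shm, &info) == 0)
                printf("pool %s: %u buffers\n", info.name,
                       (unsigned)info.params.num_bufs);

        odp_buffer_pool_destroy(pool);
}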
> > /** > * Print buffer pool info > diff --git a/platform/linux-generic/include/api/odp_config.h b/platform/linux-generic/include/api/odp_config.h > index 906897c..1226d37 100644 > --- a/platform/linux-generic/include/api/odp_config.h > +++ b/platform/linux-generic/include/api/odp_config.h > @@ -49,6 +49,16 @@ extern "C" { > #define ODP_CONFIG_PKTIO_ENTRIES 64 > > /** > + * Segment size to use - What does "-" mean? Can you elaborate more on this? > + */ > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > + > +/** > + * Maximum buffer size supported > + */ > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) Isn't this platform specific? > + > +/** > * @} > */ > > diff --git a/platform/linux-generic/include/api/odp_platform_types.h b/platform/linux-generic/include/api/odp_platform_types.h > index 4db47d3..b9b3aea 100644 > --- a/platform/linux-generic/include/api/odp_platform_types.h > +++ b/platform/linux-generic/include/api/odp_platform_types.h > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > > /** > + * ODP shared memory block > + */ > +typedef uint32_t odp_shm_t; > + > +/** Invalid shared memory block */ > +#define ODP_SHM_INVALID 0 > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use */ ODP_SHM_* touches shm functionality and should be in its own patch to fix/move it. > + > +/** > * @} > */ > > diff --git a/platform/linux-generic/include/api/odp_shared_memory.h b/platform/linux-generic/include/api/odp_shared_memory.h > index 26e208b..f70db5a 100644 > --- a/platform/linux-generic/include/api/odp_shared_memory.h > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > @@ -20,6 +20,7 @@ extern "C" { > > > #include <odp_std_types.h> > +#include <odp_platform_types.h> Not relevant for the odp_buffer_pool_create > > /** @defgroup odp_shared_memory ODP SHARED MEMORY > * Operations on shared memory. > @@ -38,15 +39,6 @@ extern "C" { > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > > /** > - * ODP shared memory block > - */ > -typedef uint32_t odp_shm_t; > - > -/** Invalid shared memory block */ > -#define ODP_SHM_INVALID 0 > - > - > -/** > * Shared memory block info > */ > typedef struct odp_shm_info_t { > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h > new file mode 100644 > index 0000000..f33b41d > --- /dev/null > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > @@ -0,0 +1,157 @@ > +/* Copyright (c) 2014, Linaro Limited > + * All rights reserved. 
> + * > + * SPDX-License-Identifier: BSD-3-Clause > + */ > + > +/** > + * @file > + * > + * Inline functions for ODP buffer mgmt routines - implementation internal > + */ > + > +#ifndef ODP_BUFFER_INLINES_H_ > +#define ODP_BUFFER_INLINES_H_ > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t *hdr) > +{ > + odp_buffer_bits_t handle; > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > + struct pool_entry_s *pool = get_pool_entry(pool_id); > + > + handle.pool_id = pool_id; > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > + ODP_CACHE_LINE_SIZE; > + handle.seg = 0; > + > + return handle.u32; > +} > + > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > +{ > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > + if (hdl != hdr->handle.handle) { > + ODP_DBG("buf %p should have handle %x but is cached as %x\n", > + hdr, hdl, hdr->handle.handle); > + hdr->handle.handle = hdl; > + } > + return hdr->handle.handle; > +} > + > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > +{ > + odp_buffer_bits_t handle; > + uint32_t pool_id; > + uint32_t index; > + struct pool_entry_s *pool; > + > + handle.u32 = buf; > + pool_id = handle.pool_id; > + index = handle.index; > + > +#ifdef POOL_ERROR_CHECK > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > + return NULL; > + } > +#endif > + > + pool = get_pool_entry(pool_id); > + > +#ifdef POOL_ERROR_CHECK > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > + return NULL; > + } > +#endif > + > + return (odp_buffer_hdr_t *)(void *) > + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE)); > +} > + > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > +{ > + return odp_atomic_load_u32(&buf->ref_count); > +} > + > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf, > + uint32_t val) > +{ > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > +} > + > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf, > + uint32_t val) > +{ > + uint32_t tmp; > + > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > + > + if (tmp < val) { > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > + return 0; > + } else { drop the else statement > + return tmp - val; > + } > +} > + > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > +{ > + odp_buffer_bits_t handle; > + odp_buffer_hdr_t *buf_hdr; > + handle.u32 = buf; > + > + /* For buffer handles, segment index must be 0 */ Why does the buffer handle always have to have a segment index that must be 0? > + if (handle.seg != 0) > + return NULL; Why do we need to check everything? shouldn't we trust our internal stuff to be sent correctly? Maybe it should be an ODP_ASSERT? > + > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > + > + /* If pool not created, handle is invalid */ > + if (pool->s.pool_shm == ODP_SHM_INVALID) > + return NULL; The same applies here. 
> + > + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; > + > + /* A valid buffer index must be on stride, and must be in range */ > + if ((handle.index % buf_stride != 0) || > + ((uint32_t)(handle.index / buf_stride) >= pool->s.params.num_bufs)) > + return NULL; > + > + buf_hdr = (odp_buffer_hdr_t *)(void *) > + (pool->s.pool_base_addr + > + (handle.index * ODP_CACHE_LINE_SIZE)); > + > + /* Handle is valid, so buffer is valid if it is allocated */ > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > + return NULL; > + else Drop the else > + return buf_hdr; > +} > + > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > + > +static inline void *buffer_map(odp_buffer_hdr_t *buf, > + size_t offset, > + size_t *seglen, > + size_t limit) > +{ > + int seg_index = offset / buf->segsize; > + int seg_offset = offset % buf->segsize; > + size_t buf_left = limit - offset; > + > + *seglen = buf_left < buf->segsize ? > + buf_left : buf->segsize - seg_offset; > + > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); > +} > + > +#ifdef __cplusplus > +} > +#endif > + > +#endif > diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h > index 0027bfc..29666db 100644 > --- a/platform/linux-generic/include/odp_buffer_internal.h > +++ b/platform/linux-generic/include/odp_buffer_internal.h > @@ -24,99 +24,118 @@ extern "C" { > #include <odp_buffer.h> > #include <odp_debug.h> > #include <odp_align.h> > - > -/* TODO: move these to correct files */ > - > -typedef uint64_t odp_phys_addr_t; > - > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > - > -#define ODP_BUFS_PER_CHUNK 16 > -#define ODP_BUFS_PER_SCATTER 4 > - > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > - > +#include <odp_config.h> > +#include <odp_byteorder.h> > +#include <odp_thread.h> > + > + > +#define ODP_BUFFER_MAX_SEG (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG - 1)) > + > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0, > + "ODP Segment size must be a multiple of cache line size"); > + > +#define ODP_SEGBITS(x) \ > + ((x) < 2 ? 1 : \ > + ((x) < 4 ? 2 : \ > + ((x) < 8 ? 3 : \ > + ((x) < 16 ? 4 : \ > + ((x) < 32 ? 5 : \ > + ((x) < 64 ? 6 : \ Do you need to add the tab "6 :<tab>\" > + ((x) < 128 ? 7 : \ > + ((x) < 256 ? 8 : \ > + ((x) < 512 ? 9 : \ > + ((x) < 1024 ? 10 : \ > + ((x) < 2048 ? 11 : \ > + ((x) < 4096 ? 
12 : \ > + (0/0))))))))))))) > + > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > + "Number of segments must not exceed log of cache line size"); > > #define ODP_BUFFER_POOL_BITS 4 > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - ODP_BUFFER_SEG_BITS) > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + ODP_BUFFER_INDEX_BITS) > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > + > typedef union odp_buffer_bits_t { > uint32_t u32; > odp_buffer_t handle; > > struct { > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > uint32_t index:ODP_BUFFER_INDEX_BITS; > + uint32_t seg:ODP_BUFFER_SEG_BITS; > +#else > + uint32_t seg:ODP_BUFFER_SEG_BITS; > + uint32_t index:ODP_BUFFER_INDEX_BITS; > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > +#endif and this will work on 64bit platforms? > }; > -} odp_buffer_bits_t; > > + struct { > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > +#else > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > +#endif > + }; > +} odp_buffer_bits_t; > > /* forward declaration */ > struct odp_buffer_hdr_t; > > - > -/* > - * Scatter/gather list of buffers > - */ > -typedef struct odp_buffer_scatter_t { > - /* buffer pointers */ > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > - int num_bufs; /* num buffers */ > - int pos; /* position on the list */ > - size_t total_len; /* Total length */ > -} odp_buffer_scatter_t; > - > - > -/* > - * Chunk of buffers (in single pool) > - */ > -typedef struct odp_buffer_chunk_t { > - uint32_t num_bufs; /* num buffers */ > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > -} odp_buffer_chunk_t; > - > - > /* Common buffer header */ > typedef struct odp_buffer_hdr_t { > struct odp_buffer_hdr_t *next; /* next buf in a list */ > + int allocator; /* allocating thread id */ > odp_buffer_bits_t handle; /* handle */ > - odp_phys_addr_t phys_addr; /* physical data start address */ > - void *addr; /* virtual data start address */ > - uint32_t index; /* buf index in the pool */ > + union { > + uint32_t all; > + struct { > + uint32_t zeroized:1; /* Zeroize buf data on free */ > + uint32_t hdrdata:1; /* Data is in buffer hdr */ > + }; > + } flags; > + int type; /* buffer type */ > size_t size; /* max data size */ > - size_t cur_offset; /* current offset */ > odp_atomic_u32_t ref_count; /* reference count */ > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ > - int type; /* type of next header */ > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ > - > + union { > + void *buf_ctx; /* user context */ > + void *udata_addr; /* user metadata addr */ > + }; > + size_t udata_size; /* size of user metadata */ > + uint32_t segcount; /* segment count */ > + uint32_t segsize; /* segment size */ > + void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs */ > } odp_buffer_hdr_t; > > -/* Ensure next header starts from 8 byte align */ > -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, "ODP_BUFFER_HDR_T__SIZE_ERROR"); > +typedef struct odp_buffer_hdr_stride { > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > 
+} odp_buffer_hdr_stride; > > +typedef struct odp_buf_blk_t { > + struct odp_buf_blk_t *next; > + struct odp_buf_blk_t *prev; > +} odp_buf_blk_t; > > /* Raw buffer header */ > typedef struct { > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > - uint8_t buf_data[]; /* start of buffer data area */ > } odp_raw_buffer_hdr_t; > > - > -/* Chunk header */ > -typedef struct odp_buffer_chunk_hdr_t { > - odp_buffer_hdr_t buf_hdr; > - odp_buffer_chunk_t chunk; > -} odp_buffer_chunk_hdr_t; > - > - > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > - > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src); > - > +/* Forward declarations */ > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > > #ifdef __cplusplus > } > diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h > index e0210bd..cd58f91 100644 > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > @@ -25,6 +25,35 @@ extern "C" { > #include <odp_hints.h> > #include <odp_config.h> > #include <odp_debug.h> > +#include <odp_shared_memory.h> > +#include <odp_atomic.h> > +#include <odp_atomic_internal.h> > +#include <string.h> > + > +/** > + * Buffer initialization routine prototype > + * > + * @note Routines of this type MAY be passed as part of the > + * _odp_buffer_pool_init_t structure to be called whenever a > + * buffer is allocated to initialize the user metadata > + * associated with that buffer. > + */ > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); > + > +/** > + * Buffer pool initialization parameters > + * > + * @param[in] udata_size Size of the user metadata for each buffer > + * @param[in] buf_init Function pointer to be called to initialize the > + * user metadata for each buffer in the pool. > + * @param[in] buf_init_arg Argument to be passed to buf_init(). 
> + * > + */ > +typedef struct _odp_buffer_pool_init_t { > + size_t udata_size; /**< Size of user metadata for each buffer */ > + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to use */ > + void *buf_init_arg; /**< Argument to be passed to buf_init() */ > +} _odp_buffer_pool_init_t; /**< Type of buffer initialization struct */ > > /* Use ticketlock instead of spinlock */ > #define POOL_USE_TICKETLOCK > @@ -39,6 +68,17 @@ extern "C" { > #include <odp_spinlock.h> > #endif > > +#ifdef POOL_USE_TICKETLOCK > +#include <odp_ticketlock.h> > +#define LOCK(a) odp_ticketlock_lock(a) > +#define UNLOCK(a) odp_ticketlock_unlock(a) > +#define LOCK_INIT(a) odp_ticketlock_init(a) > +#else > +#include <odp_spinlock.h> > +#define LOCK(a) odp_spinlock_lock(a) > +#define UNLOCK(a) odp_spinlock_unlock(a) > +#define LOCK_INIT(a) odp_spinlock_init(a) > +#endif > > struct pool_entry_s { > #ifdef POOL_USE_TICKETLOCK > @@ -47,66 +87,224 @@ struct pool_entry_s { > odp_spinlock_t lock ODP_ALIGNED_CACHE; > #endif > > - odp_buffer_chunk_hdr_t *head; > - uint64_t free_bufs; > char name[ODP_BUFFER_POOL_NAME_LEN]; > - > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > - uintptr_t buf_base; > - size_t buf_size; > - size_t buf_offset; > - uint64_t num_bufs; > - void *pool_base_addr; > - uint64_t pool_size; > - size_t user_size; > - size_t user_align; > - int buf_type; > - size_t hdr_size; > + odp_buffer_pool_param_t params; > + _odp_buffer_pool_init_t init_params; > + odp_buffer_pool_t pool_hdl; > + odp_shm_t pool_shm; > + union { > + uint32_t all; > + struct { > + uint32_t has_name:1; > + uint32_t user_supplied_shm:1; > + uint32_t unsegmented:1; > + uint32_t zeroized:1; > + uint32_t quiesced:1; > + uint32_t low_wm_assert:1; > + uint32_t predefined:1; > + }; > + } flags; > + uint8_t *pool_base_addr; > + size_t pool_size; > + uint32_t buf_stride; > + _odp_atomic_ptr_t buf_freelist; > + _odp_atomic_ptr_t blk_freelist; > + odp_atomic_u32_t bufcount; > + odp_atomic_u32_t blkcount; > + odp_atomic_u64_t bufallocs; > + odp_atomic_u64_t buffrees; > + odp_atomic_u64_t blkallocs; > + odp_atomic_u64_t blkfrees; > + odp_atomic_u64_t bufempty; > + odp_atomic_u64_t blkempty; > + odp_atomic_u64_t high_wm_count; > + odp_atomic_u64_t low_wm_count; > + size_t seg_size; > + size_t high_wm; > + size_t low_wm; > + size_t headroom; > + size_t tailroom; General comment add the same level of information into the variable names. Not consistent use "_" used to separate words in variable names. 
> }; > > +typedef union pool_entry_u { > + struct pool_entry_s s; > + > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; > +} pool_entry_t; > > extern void *pool_entry_ptr[]; > > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1) > +#define buffer_is_secure(buf) (buf->flags.zeroized) > +#define pool_is_secure(pool) (pool->flags.zeroized) > +#else > +#define buffer_is_secure(buf) 0 > +#define pool_is_secure(pool) 0 > +#endif > + > +#define TAG_ALIGN ((size_t)16) > > -static inline void *get_pool_entry(uint32_t pool_id) > +#define odp_cs(ptr, old, new) \ > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, \ > + _ODP_MEMMODEL_SC, \ > + _ODP_MEMMODEL_SC) > + > +/* Helper functions for pointer tagging to avoid ABA race conditions */ > +#define odp_tag(ptr) \ > + (((size_t)ptr) & (TAG_ALIGN - 1)) > + > +#define odp_detag(ptr) \ > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > + > +#define odp_retag(ptr, tag) \ > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > + > + > +static inline void *get_blk(struct pool_entry_s *pool) > { > - return pool_entry_ptr[pool_id]; > + void *oldhead, *myhead, *newhead; > + > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ); > + > + do { > + size_t tag = odp_tag(oldhead); > + myhead = odp_detag(oldhead); > + if (myhead == NULL) > + break; > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + 1); > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > + > + if (myhead == NULL) { > + odp_atomic_inc_u64(&pool->blkempty); > + } else { > + uint64_t blkcount = > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); > + > + /* Check for low watermark condition */ > + if (blkcount == pool->low_wm) { > + LOCK(&pool->lock); > + if (blkcount <= pool->low_wm && > + !pool->flags.low_wm_assert) { > + pool->flags.low_wm_assert = 1; > + odp_atomic_inc_u64(&pool->low_wm_count); > + } > + UNLOCK(&pool->lock); > + } > + odp_atomic_inc_u64(&pool->blkallocs); > + } > + > + return (void *)myhead; > } > > +static inline void ret_blk(struct pool_entry_s *pool, void *block) > +{ > + void *oldhead, *myhead, *myblock; > + > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ); > > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > + do { > + size_t tag = odp_tag(oldhead); > + myhead = odp_detag(oldhead); > + ((odp_buf_blk_t *)block)->next = myhead; > + myblock = odp_retag(block, tag + 1); > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > + > + odp_atomic_inc_u64(&pool->blkfrees); > + uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1); Move uint64_t up with next to all the other globaly declared variables for this function. Some comments to start with. =) Cheers, Anders
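To make the pointer-tagging scheme in get_blk()/ret_blk() above easier to follow, here is the same idea distilled into a standalone pop, written against plain C11 atomics rather than the ODP atomic wrappers; the node type and names are illustrative, and nodes are assumed to be at least TAG_ALIGN-byte aligned so their low bits are free:

#include <stdatomic.h>
#include <stddef.h>

#define TAG_ALIGN ((size_t)16)

typedef struct node { struct node *next; } node_t;

/* The low bits of an aligned pointer are always zero, so they can
 * carry a modification counter. The CAS then fails if the head was
 * popped and pushed back between the load and the exchange, which is
 * the ABA hazard the odp_tag/odp_detag/odp_retag macros guard against. */
static inline size_t tag_of(void *p)
{
        return (size_t)p & (TAG_ALIGN - 1);
}

static inline node_t *detag(void *p)
{
        return (node_t *)((size_t)p & ~(TAG_ALIGN - 1));
}

static inline void *retag(node_t *p, size_t tag)
{
        return (void *)((size_t)p | (tag & (TAG_ALIGN - 1)));
}

static node_t *pop(_Atomic(void *) *head)
{
        void *old = atomic_load_explicit(head, memory_order_acquire);
        node_t *n;
        void *next;

        do {
                size_t tag = tag_of(old);
                n = detag(old);
                if (n == NULL)
                        return NULL;
                next = retag(n->next, tag + 1); /* bump tag on every pop */
        } while (!atomic_compare_exchange_strong(head, &old, next));

        return n;
}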
On Tue, Dec 2, 2014 at 3:05 PM, Anders Roxell <anders.roxell@linaro.org> wrote: > prefix this patch with: > api: ... > > On 2014-12-02 13:17, Bill Fischofer wrote: > > Restructure ODP buffer pool internals to support new APIs. > > The comment doesn't add any extra value beyond the short log. > "Modifies linux-generic, example and test to make them ready for adding the > new odp_buffer_pool_create API" The comment is descriptive of what's in the patch. > > > Implements new odp_buffer_pool_create() API. > > > > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> > > --- > > example/generator/odp_generator.c | 19 +- > > example/ipsec/odp_ipsec.c | 57 +- > > example/l2fwd/odp_l2fwd.c | 19 +- > > example/odp_example/odp_example.c | 18 +- > > example/packet/odp_pktio.c | 19 +- > > example/timer/odp_timer_test.c | 13 +- > > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- > > platform/linux-generic/include/api/odp_config.h | 10 + > > .../linux-generic/include/api/odp_platform_types.h | 9 + > > Grouping stuff into odp_platform_types.h should be its own patch. > > The change to odp_platform_types.h moves typedefs from odp_shared_memory.h to break circular dependencies that would otherwise arise. As a result, this is not separable from the rest of this patch. > > .../linux-generic/include/api/odp_shared_memory.h | 10 +- > > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ > > Creating an inline file should be its own patch. > No, it's not independent of the rest of these changes. This is a restructuring patch. The rule that you've promoted is that each patch can be applied independently. Trying to make this its own patch wouldn't follow that rule. > > > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- > > .../include/odp_buffer_pool_internal.h | 278 ++++++-- > > .../linux-generic/include/odp_packet_internal.h | 50 +- > > .../linux-generic/include/odp_timer_internal.h | 11 +- > > platform/linux-generic/odp_buffer.c | 31 +- > > platform/linux-generic/odp_buffer_pool.c | 711 > +++++++++------------ > > platform/linux-generic/odp_packet.c | 41 +- > > platform/linux-generic/odp_queue.c | 1 + > > platform/linux-generic/odp_schedule.c | 20 +- > > platform/linux-generic/odp_timer.c | 3 +- > > test/api_test/odp_timer_ping.c | 19 +- > > test/validation/odp_crypto.c | 43 +- > > test/validation/odp_queue.c | 19 +- > > 24 files changed, 1024 insertions(+), 762 deletions(-) > > create mode 100644 platform/linux-generic/include/odp_buffer_inlines.h > > > > [...] > > > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h > b/platform/linux-generic/include/api/odp_buffer_pool.h > > index 30b83e0..7022daa 100644 > > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > > @@ -36,32 +36,101 @@ extern "C" { > > #define ODP_BUFFER_POOL_INVALID 0 > > > > /** > > + * Buffer pool parameters > > + * Used to communicate buffer pool creation options. > > + */ > > +typedef struct odp_buffer_pool_param_t { > > + size_t buf_size; /**< Buffer size in bytes. The maximum > > + number of bytes application will > > "...bytes the application..." > The definite article is optional in English grammar here. This level of nit-picking isn't needed. > > > + store in each buffer. */ > > + size_t buf_align; /**< Minimum buffer alignment in bytes. > > + Valid values are powers of two. Use 0 > > + for default alignment. Default will > > + always be a multiple of 8. 
*/ > > + uint32_t num_bufs; /**< Number of buffers in the pool */ > > + int buf_type; /**< Buffer type */ > > +} odp_buffer_pool_param_t; > > + > > +/** > > * Create a buffer pool > > + * This routine is used to create a buffer pool. It take three > > + * arguments: the optional name of the pool to be created, an optional > shared > > + * memory handle, and a parameter struct that describes the pool to be > > + * created. If a name is not specified the result is an anonymous pool > that > > + * cannot be referenced by odp_buffer_pool_lookup(). > > * > > - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 > chars) > > - * @param base_addr Pool base address > > - * @param size Pool size in bytes > > - * @param buf_size Buffer size in bytes > > - * @param buf_align Minimum buffer alignment > > - * @param buf_type Buffer type > > + * @param[in] name Name of the pool, max ODP_BUFFER_POOL_NAME_LEN-1 > chars. > > + * May be specified as NULL for anonymous pools. > > * > > - * @return Buffer pool handle > > + * @param[in] shm The shared memory object in which to create the > pool. > > + * Use ODP_SHM_NULL to reserve default memory type > > + * for the buffer type. > > + * > > + * @param[in] params Buffer pool parameters. > > + * > > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call failed. > > Should be > @retval Buffer pool handle on success > @retval ODP_BUFFER_POOL_INVALID if call failed 1 (if it can fail list the > reasons) > @retval ODP_BUFFER_POOL_INVALID if call failed 2 (if it can fail list the > reasons) > @retval ODP_BUFFER_POOL_INVALID if call failed N > The documentation is consistent with that used in the rest of the file. If we want a doc cleanup patch that should be a separate patch and cover the whole file, not just one routine that would otherwise stand out as an anomaly. I'll be happy to write that after this patch gets merged. > > > */ > > + > > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > - void *base_addr, uint64_t size, > > - size_t buf_size, size_t buf_align, > > - int buf_type); > > + odp_shm_t shm, > > + odp_buffer_pool_param_t *params); > > > > +/** > > + * Destroy a buffer pool previously created by odp_buffer_pool_create() > > + * > > + * @param[in] pool Handle of the buffer pool to be destroyed > > + * > > + * @return 0 on Success, -1 on Failure. > > use @retval here as well and list the reasons how it can fail.] > Same comment as above. > > > + * > > + * @note This routine destroys a previously created buffer pool. This > call > > + * does not destroy any shared memory object passed to > > + * odp_buffer_pool_create() used to store the buffer pool contents. The > caller > > + * takes responsibility for that. If no shared memory object was passed > as > > + * part of the create call, then this routine will destroy any internal > shared > > + * memory objects associated with the buffer pool. Results are > undefined if > > + * an attempt is made to destroy a buffer pool that contains allocated > or > > + * otherwise active buffers. > > + */ > > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); > > This doesn't belong in this patch, belongs in the > odp_buffer_pool_destroy patch. > > That patch is for the implementation of the function, as described. This is benign here. > > > > /** > > * Find a buffer pool by name > > * > > - * @param name Name of the pool > > + * @param[in] name Name of the pool > > * > > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found. > > Fix this. > Same comments as above. 
> > > + * > > + * @note This routine cannot be used to look up an anonymous pool (one > created > > + * with no name). > > How can I delete an anonymous pool? > You can't. This is just implementing what's been specified. If we want to change the spec that can be addressed in a follow-on patch. > > > */ > > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > > > > +/** > > + * Buffer pool information struct > > + * Used to get information about a buffer pool. > > + */ > > +typedef struct odp_buffer_pool_info_t { > > + const char *name; /**< pool name */ > > + odp_buffer_pool_param_t params; /**< pool parameters */ > > +} odp_buffer_pool_info_t; > > + > > +/** > > + * Retrieve information about a buffer pool > > + * > > + * @param[in] pool Buffer pool handle > > + * > > + * @param[out] shm Recieves odp_shm_t supplied by caller at > > + * pool creation, or ODP_SHM_NULL if the > > + * pool is managed internally. > > + * > > + * @param[out] info Receives an odp_buffer_pool_info_t object > > + * that describes the pool. > > + * > > + * @return 0 on success, -1 if info could not be retrieved. > > Fix > Same doc comments as above. > > > + */ > > + > > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > > + odp_buffer_pool_info_t *info); > > This doesn't belong in this patch, belongs in the > odp_buffer_pool_info patch. > > Again, the separate patch implements these functions. These are benign. > > > > /** > > * Print buffer pool info > > diff --git a/platform/linux-generic/include/api/odp_config.h > b/platform/linux-generic/include/api/odp_config.h > > index 906897c..1226d37 100644 > > --- a/platform/linux-generic/include/api/odp_config.h > > +++ b/platform/linux-generic/include/api/odp_config.h > > @@ -49,6 +49,16 @@ extern "C" { > > #define ODP_CONFIG_PKTIO_ENTRIES 64 > > > > /** > > + * Segment size to use - > > What does "-" mean? > Can you elaborate more on this? > It's a stray character. > > > + */ > > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > > + > > +/** > > + * Maximum buffer size supported > > + */ > > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) > > Isn't this platform specific? > Yes, and this is platform/linux-generic. I've chosen this for now because the current linux-generic packet I/O doesn't support scatter/gather reads/writes. > > > + > > +/** > > * @} > > */ > > > > diff --git a/platform/linux-generic/include/api/odp_platform_types.h > b/platform/linux-generic/include/api/odp_platform_types.h > > index 4db47d3..b9b3aea 100644 > > --- a/platform/linux-generic/include/api/odp_platform_types.h > > +++ b/platform/linux-generic/include/api/odp_platform_types.h > > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > > > > /** > > + * ODP shared memory block > > + */ > > +typedef uint32_t odp_shm_t; > > + > > +/** Invalid shared memory block */ > > +#define ODP_SHM_INVALID 0 > > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use */ > > ODP_SHM_* touches shm functionality and should be in its own patch to > fix/move it. > Already discussed above. 
> > > + > > +/** > > * @} > > */ > > > > diff --git a/platform/linux-generic/include/api/odp_shared_memory.h > b/platform/linux-generic/include/api/odp_shared_memory.h > > index 26e208b..f70db5a 100644 > > --- a/platform/linux-generic/include/api/odp_shared_memory.h > > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > > @@ -20,6 +20,7 @@ extern "C" { > > > > > > #include <odp_std_types.h> > > +#include <odp_platform_types.h> > > Not relevant for the odp_buffer_pool_create > Incorrect. It is part of the restructure for reasons discussed above. > > > > > /** @defgroup odp_shared_memory ODP SHARED MEMORY > > * Operations on shared memory. > > @@ -38,15 +39,6 @@ extern "C" { > > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > > > > /** > > - * ODP shared memory block > > - */ > > -typedef uint32_t odp_shm_t; > > - > > -/** Invalid shared memory block */ > > -#define ODP_SHM_INVALID 0 > > - > > - > > -/** > > * Shared memory block info > > */ > > typedef struct odp_shm_info_t { > > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h > b/platform/linux-generic/include/odp_buffer_inlines.h > > new file mode 100644 > > index 0000000..f33b41d > > --- /dev/null > > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > > @@ -0,0 +1,157 @@ > > +/* Copyright (c) 2014, Linaro Limited > > + * All rights reserved. > > + * > > + * SPDX-License-Identifier: BSD-3-Clause > > + */ > > + > > +/** > > + * @file > > + * > > + * Inline functions for ODP buffer mgmt routines - implementation > internal > > + */ > > + > > +#ifndef ODP_BUFFER_INLINES_H_ > > +#define ODP_BUFFER_INLINES_H_ > > + > > +#ifdef __cplusplus > > +extern "C" { > > +#endif > > + > > +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t > *hdr) > > +{ > > + odp_buffer_bits_t handle; > > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > > + struct pool_entry_s *pool = get_pool_entry(pool_id); > > + > > + handle.pool_id = pool_id; > > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > > + ODP_CACHE_LINE_SIZE; > > + handle.seg = 0; > > + > > + return handle.u32; > > +} > > + > > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > > +{ > > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > > + if (hdl != hdr->handle.handle) { > > + ODP_DBG("buf %p should have handle %x but is cached as > %x\n", > > + hdr, hdl, hdr->handle.handle); > > + hdr->handle.handle = hdl; > > + } > > + return hdr->handle.handle; > > +} > > + > > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > +{ > > + odp_buffer_bits_t handle; > > + uint32_t pool_id; > > + uint32_t index; > > + struct pool_entry_s *pool; > > + > > + handle.u32 = buf; > > + pool_id = handle.pool_id; > > + index = handle.index; > > + > > +#ifdef POOL_ERROR_CHECK > > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > > + return NULL; > > + } > > +#endif > > + > > + pool = get_pool_entry(pool_id); > > + > > +#ifdef POOL_ERROR_CHECK > > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > > + return NULL; > > + } > > +#endif > > + > > + return (odp_buffer_hdr_t *)(void *) > > + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE)); > > +} > > + > > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > > +{ > > + return odp_atomic_load_u32(&buf->ref_count); > > +} > > + > > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf, > > + uint32_t 
val) > > +{ > > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > > +} > > + > > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf, > > + uint32_t val) > > +{ > > + uint32_t tmp; > > + > > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > > + > > + if (tmp < val) { > > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > > + return 0; > > + } else { > > drop the else statement > That would be erroneous code. Refcounts don't go below 0. This code ensures that. > > > + return tmp - val; > > + } > > +} > > + > > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > > +{ > > + odp_buffer_bits_t handle; > > + odp_buffer_hdr_t *buf_hdr; > > + handle.u32 = buf; > > + > > + /* For buffer handles, segment index must be 0 */ > > Why does the buffer handle always have to have a segment index that must > be 0? > Because that's how I've defined it in this implementation. validate_buffer() can be given any 32-bit value and it will robustly say whether or not it is a valid buffer handle. > > > + if (handle.seg != 0) > > + return NULL; > > Why do we need to check everything? > shouldn't we trust our internal stuff to be sent correctly? > Maybe it should be an ODP_ASSERT? > No, odp_buffer_is_valid() does not assert. It returns a yes/no value for any input value. > > > + > > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > > + > > + /* If pool not created, handle is invalid */ > > + if (pool->s.pool_shm == ODP_SHM_INVALID) > > + return NULL; > > The same applies here. > Same answer. > > > + > > + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; > > + > > + /* A valid buffer index must be on stride, and must be in range */ > > + if ((handle.index % buf_stride != 0) || > > + ((uint32_t)(handle.index / buf_stride) >= > pool->s.params.num_bufs)) > > + return NULL; > > + > > + buf_hdr = (odp_buffer_hdr_t *)(void *) > > + (pool->s.pool_base_addr + > > + (handle.index * ODP_CACHE_LINE_SIZE)); > > + > > + /* Handle is valid, so buffer is valid if it is allocated */ > > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > > + return NULL; > > + else > > Drop the else > No, that would be erroneous. A buffer handle is no longer valid if the buffer has been freed. That's what's being checked here. > > > + return buf_hdr; > > +} > > + > > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > + > > +static inline void *buffer_map(odp_buffer_hdr_t *buf, > > + size_t offset, > > + size_t *seglen, > > + size_t limit) > > +{ > > + int seg_index = offset / buf->segsize; > > + int seg_offset = offset % buf->segsize; > > + size_t buf_left = limit - offset; > > + > > + *seglen = buf_left < buf->segsize ? 
> > + buf_left : buf->segsize - seg_offset; > > + > > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); > > +} > > + > > +#ifdef __cplusplus > > +} > > +#endif > > + > > +#endif > > diff --git a/platform/linux-generic/include/odp_buffer_internal.h > b/platform/linux-generic/include/odp_buffer_internal.h > > index 0027bfc..29666db 100644 > > --- a/platform/linux-generic/include/odp_buffer_internal.h > > +++ b/platform/linux-generic/include/odp_buffer_internal.h > > @@ -24,99 +24,118 @@ extern "C" { > > #include <odp_buffer.h> > > #include <odp_debug.h> > > #include <odp_align.h> > > - > > -/* TODO: move these to correct files */ > > - > > -typedef uint64_t odp_phys_addr_t; > > - > > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > - > > -#define ODP_BUFS_PER_CHUNK 16 > > -#define ODP_BUFS_PER_SCATTER 4 > > - > > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > > - > > +#include <odp_config.h> > > +#include <odp_byteorder.h> > > +#include <odp_thread.h> > > + > > + > > +#define ODP_BUFFER_MAX_SEG > (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG - > 1)) > > + > > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0, > > + "ODP Segment size must be a multiple of cache line > size"); > > + > > +#define ODP_SEGBITS(x) \ > > + ((x) < 2 ? 1 : \ > > + ((x) < 4 ? 2 : \ > > + ((x) < 8 ? 3 : \ > > + ((x) < 16 ? 4 : \ > > + ((x) < 32 ? 5 : \ > > + ((x) < 64 ? 6 : \ > > Do you need to add the tab "6 :<tab>\" > I'm not sure I understand the comment. > > > + ((x) < 128 ? 7 : \ > > + ((x) < 256 ? 8 : \ > > + ((x) < 512 ? 9 : \ > > + ((x) < 1024 ? 10 : \ > > + ((x) < 2048 ? 11 : \ > > + ((x) < 4096 ? 12 : \ > > + (0/0))))))))))))) > > + > > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > > + "Number of segments must not exceed log of cache line > size"); > > > > #define ODP_BUFFER_POOL_BITS 4 > > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - > ODP_BUFFER_SEG_BITS) > > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + > ODP_BUFFER_INDEX_BITS) > > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > > > > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > + > > typedef union odp_buffer_bits_t { > > uint32_t u32; > > odp_buffer_t handle; > > > > struct { > > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > uint32_t index:ODP_BUFFER_INDEX_BITS; > > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > +#else > > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > + uint32_t index:ODP_BUFFER_INDEX_BITS; > > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > +#endif > > and this will work on 64bit platforms? > Yes. I'm developing on a 64-bit platform. 
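The underlying point: odp_buffer_bits_t packs into a single uint32_t, so pointer width never enters into it; only byte order changes the field order. A standalone sketch of the little-endian case, assuming ODP_BUFFER_POOL_BITS = 4 and a 64-byte cache line (for which ODP_SEGBITS(64) evaluates to 7, leaving 21 index bits):

#include <stdint.h>
#include <stdio.h>

typedef union {
        uint32_t u32;
        struct {                      /* little-endian layout only */
                uint32_t seg:7;       /* ODP_BUFFER_SEG_BITS   */
                uint32_t index:21;    /* ODP_BUFFER_INDEX_BITS */
                uint32_t pool_id:4;   /* ODP_BUFFER_POOL_BITS  */
        };
} buffer_bits_t;

int main(void)
{
        buffer_bits_t h = { .u32 = 0 };

        h.pool_id = 3;
        h.index   = 42;
        h.seg     = 0; /* buffer handles always encode seg == 0 */

        /* 4 bytes whether the host is 32- or 64-bit */
        printf("sizeof=%zu handle=0x%08x pool=%u index=%u\n",
               sizeof(h), (unsigned)h.u32, (unsigned)h.pool_id,
               (unsigned)h.index);
        return 0;
}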
> > > }; > > -} odp_buffer_bits_t; > > > > + struct { > > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > +#else > > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > +#endif > > + }; > > +} odp_buffer_bits_t; > > > > /* forward declaration */ > > struct odp_buffer_hdr_t; > > > > - > > -/* > > - * Scatter/gather list of buffers > > - */ > > -typedef struct odp_buffer_scatter_t { > > - /* buffer pointers */ > > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > > - int num_bufs; /* num buffers */ > > - int pos; /* position on the list */ > > - size_t total_len; /* Total length */ > > -} odp_buffer_scatter_t; > > - > > - > > -/* > > - * Chunk of buffers (in single pool) > > - */ > > -typedef struct odp_buffer_chunk_t { > > - uint32_t num_bufs; /* num buffers */ > > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > > -} odp_buffer_chunk_t; > > - > > - > > /* Common buffer header */ > > typedef struct odp_buffer_hdr_t { > > struct odp_buffer_hdr_t *next; /* next buf in a list */ > > + int allocator; /* allocating thread id */ > > odp_buffer_bits_t handle; /* handle */ > > - odp_phys_addr_t phys_addr; /* physical data start > address */ > > - void *addr; /* virtual data start address > */ > > - uint32_t index; /* buf index in the pool */ > > + union { > > + uint32_t all; > > + struct { > > + uint32_t zeroized:1; /* Zeroize buf data on free */ > > + uint32_t hdrdata:1; /* Data is in buffer hdr */ > > + }; > > + } flags; > > + int type; /* buffer type */ > > size_t size; /* max data size */ > > - size_t cur_offset; /* current offset */ > > odp_atomic_u32_t ref_count; /* reference count */ > > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ > > - int type; /* type of next header */ > > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ > > - > > + union { > > + void *buf_ctx; /* user context */ > > + void *udata_addr; /* user metadata addr */ > > + }; > > + size_t udata_size; /* size of user metadata */ > > + uint32_t segcount; /* segment count */ > > + uint32_t segsize; /* segment size */ > > + void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs > */ > > } odp_buffer_hdr_t; > > > > -/* Ensure next header starts from 8 byte align */ > > -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, > "ODP_BUFFER_HDR_T__SIZE_ERROR"); > > +typedef struct odp_buffer_hdr_stride { > > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > > +} odp_buffer_hdr_stride; > > > > +typedef struct odp_buf_blk_t { > > + struct odp_buf_blk_t *next; > > + struct odp_buf_blk_t *prev; > > +} odp_buf_blk_t; > > > > /* Raw buffer header */ > > typedef struct { > > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > > - uint8_t buf_data[]; /* start of buffer data area */ > > } odp_raw_buffer_hdr_t; > > > > - > > -/* Chunk header */ > > -typedef struct odp_buffer_chunk_hdr_t { > > - odp_buffer_hdr_t buf_hdr; > > - odp_buffer_chunk_t chunk; > > -} odp_buffer_chunk_hdr_t; > > - > > - > > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > - > > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t > buf_src); > > - > > +/* Forward declarations */ > > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > > > > #ifdef __cplusplus > > } > > diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h > b/platform/linux-generic/include/odp_buffer_pool_internal.h > > index e0210bd..cd58f91 100644 > > --- 
a/platform/linux-generic/include/odp_buffer_pool_internal.h > > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > > @@ -25,6 +25,35 @@ extern "C" { > > #include <odp_hints.h> > > #include <odp_config.h> > > #include <odp_debug.h> > > +#include <odp_shared_memory.h> > > +#include <odp_atomic.h> > > +#include <odp_atomic_internal.h> > > +#include <string.h> > > + > > +/** > > + * Buffer initialization routine prototype > > + * > > + * @note Routines of this type MAY be passed as part of the > > + * _odp_buffer_pool_init_t structure to be called whenever a > > + * buffer is allocated to initialize the user metadata > > + * associated with that buffer. > > + */ > > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); > > + > > +/** > > + * Buffer pool initialization parameters > > + * > > + * @param[in] udata_size Size of the user metadata for each buffer > > + * @param[in] buf_init Function pointer to be called to > initialize the > > + * user metadata for each buffer in the pool. > > + * @param[in] buf_init_arg Argument to be passed to buf_init(). > > + * > > + */ > > +typedef struct _odp_buffer_pool_init_t { > > + size_t udata_size; /**< Size of user metadata for each > buffer */ > > + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to > use */ > > + void *buf_init_arg; /**< Argument to be passed to > buf_init() */ > > +} _odp_buffer_pool_init_t; /**< Type of buffer initialization > struct */ > > > > /* Use ticketlock instead of spinlock */ > > #define POOL_USE_TICKETLOCK > > @@ -39,6 +68,17 @@ extern "C" { > > #include <odp_spinlock.h> > > #endif > > > > +#ifdef POOL_USE_TICKETLOCK > > +#include <odp_ticketlock.h> > > +#define LOCK(a) odp_ticketlock_lock(a) > > +#define UNLOCK(a) odp_ticketlock_unlock(a) > > +#define LOCK_INIT(a) odp_ticketlock_init(a) > > +#else > > +#include <odp_spinlock.h> > > +#define LOCK(a) odp_spinlock_lock(a) > > +#define UNLOCK(a) odp_spinlock_unlock(a) > > +#define LOCK_INIT(a) odp_spinlock_init(a) > > +#endif > > > > struct pool_entry_s { > > #ifdef POOL_USE_TICKETLOCK > > @@ -47,66 +87,224 @@ struct pool_entry_s { > > odp_spinlock_t lock ODP_ALIGNED_CACHE; > > #endif > > > > - odp_buffer_chunk_hdr_t *head; > > - uint64_t free_bufs; > > char name[ODP_BUFFER_POOL_NAME_LEN]; > > - > > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > > - uintptr_t buf_base; > > - size_t buf_size; > > - size_t buf_offset; > > - uint64_t num_bufs; > > - void *pool_base_addr; > > - uint64_t pool_size; > > - size_t user_size; > > - size_t user_align; > > - int buf_type; > > - size_t hdr_size; > > + odp_buffer_pool_param_t params; > > + _odp_buffer_pool_init_t init_params; > > + odp_buffer_pool_t pool_hdl; > > + odp_shm_t pool_shm; > > + union { > > + uint32_t all; > > + struct { > > + uint32_t has_name:1; > > + uint32_t user_supplied_shm:1; > > + uint32_t unsegmented:1; > > + uint32_t zeroized:1; > > + uint32_t quiesced:1; > > + uint32_t low_wm_assert:1; > > + uint32_t predefined:1; > > + }; > > + } flags; > > + uint8_t *pool_base_addr; > > + size_t pool_size; > > + uint32_t buf_stride; > > + _odp_atomic_ptr_t buf_freelist; > > + _odp_atomic_ptr_t blk_freelist; > > + odp_atomic_u32_t bufcount; > > + odp_atomic_u32_t blkcount; > > + odp_atomic_u64_t bufallocs; > > + odp_atomic_u64_t buffrees; > > + odp_atomic_u64_t blkallocs; > > + odp_atomic_u64_t blkfrees; > > + odp_atomic_u64_t bufempty; > > + odp_atomic_u64_t blkempty; > > + odp_atomic_u64_t high_wm_count; > > + odp_atomic_u64_t low_wm_count; > > + size_t seg_size; > > + size_t 
high_wm; > > + size_t low_wm; > > + size_t headroom; > > + size_t tailroom; > > General comment add the same level of information into the variable > names. > > Not consistent use "_" used to separate words in variable names. > > These are internal structs. Not relevant. > > > > }; > > > > +typedef union pool_entry_u { > > + struct pool_entry_s s; > > + > > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > pool_entry_s))]; > > +} pool_entry_t; > > > > extern void *pool_entry_ptr[]; > > > > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1) > > +#define buffer_is_secure(buf) (buf->flags.zeroized) > > +#define pool_is_secure(pool) (pool->flags.zeroized) > > +#else > > +#define buffer_is_secure(buf) 0 > > +#define pool_is_secure(pool) 0 > > +#endif > > + > > +#define TAG_ALIGN ((size_t)16) > > > > -static inline void *get_pool_entry(uint32_t pool_id) > > +#define odp_cs(ptr, old, new) \ > > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, \ > > + _ODP_MEMMODEL_SC, \ > > + _ODP_MEMMODEL_SC) > > + > > +/* Helper functions for pointer tagging to avoid ABA race conditions */ > > +#define odp_tag(ptr) \ > > + (((size_t)ptr) & (TAG_ALIGN - 1)) > > + > > +#define odp_detag(ptr) \ > > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > > + > > +#define odp_retag(ptr, tag) \ > > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > > + > > + > > +static inline void *get_blk(struct pool_entry_s *pool) > > { > > - return pool_entry_ptr[pool_id]; > > + void *oldhead, *myhead, *newhead; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > _ODP_MEMMODEL_ACQ); > > + > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + if (myhead == NULL) > > + break; > > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + > 1); > > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > > + > > + if (myhead == NULL) { > > + odp_atomic_inc_u64(&pool->blkempty); > > + } else { > > + uint64_t blkcount = > > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); > > + > > + /* Check for low watermark condition */ > > + if (blkcount == pool->low_wm) { > > + LOCK(&pool->lock); > > + if (blkcount <= pool->low_wm && > > + !pool->flags.low_wm_assert) { > > + pool->flags.low_wm_assert = 1; > > + odp_atomic_inc_u64(&pool->low_wm_count); > > + } > > + UNLOCK(&pool->lock); > > + } > > + odp_atomic_inc_u64(&pool->blkallocs); > > + } > > + > > + return (void *)myhead; > > } > > > > +static inline void ret_blk(struct pool_entry_s *pool, void *block) > > +{ > > + void *oldhead, *myhead, *myblock; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > _ODP_MEMMODEL_ACQ); > > > > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + ((odp_buf_blk_t *)block)->next = myhead; > > + myblock = odp_retag(block, tag + 1); > > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > > + > > + odp_atomic_inc_u64(&pool->blkfrees); > > + uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1); > > Move uint64_t up with next to all the other globaly declared variables > for this function. > These are not global variables. > > > Some comments to start with. =) > > Cheers, > Anders >
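As a footnote to the refcount exchange above, the decrement's clamping can be checked in isolation with plain C11 atomics (the helper name is illustrative). Note that removing only the else keyword while keeping both returns, which may be what the review intended, is behaviour-identical:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the odp_buffer_decr_refcount() logic quoted earlier: if the
 * subtraction would take the count below zero, the difference is added
 * back and 0 is returned, so the count saturates at zero. */
static uint32_t decr_refcount(_Atomic uint32_t *ref, uint32_t val)
{
        uint32_t tmp = atomic_fetch_sub(ref, val);

        if (tmp < val) {
                atomic_fetch_add(ref, val - tmp);
                return 0;
        }
        return tmp - val; /* no else needed; same behaviour */
}

int main(void)
{
        _Atomic uint32_t ref = 1;

        printf("%u\n", decr_refcount(&ref, 1)); /* 0 */
        printf("%u\n", decr_refcount(&ref, 1)); /* still 0: clamped */
        return 0;
}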
I have only a few minor comments for this patch. One major query: currently this patch does not integrate headroom/tailroom at the segment level. Is that feature available in 1.0? If it needs to be supported, then we need some modifications in segment handling. Regards, Bala On Tue, Dec 02, 2014 at 01:17:01PM -0600, Bill Fischofer wrote: > Restructure ODP buffer pool internals to support new APIs. > Implements new odp_buffer_pool_create() API. > > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> > --- > example/generator/odp_generator.c | 19 +- > example/ipsec/odp_ipsec.c | 57 +- > example/l2fwd/odp_l2fwd.c | 19 +- > example/odp_example/odp_example.c | 18 +- > example/packet/odp_pktio.c | 19 +- > example/timer/odp_timer_test.c | 13 +- > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- > platform/linux-generic/include/api/odp_config.h | 10 + > .../linux-generic/include/api/odp_platform_types.h | 9 + > .../linux-generic/include/api/odp_shared_memory.h | 10 +- > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- > .../include/odp_buffer_pool_internal.h | 278 ++++++-- > .../linux-generic/include/odp_packet_internal.h | 50 +- > .../linux-generic/include/odp_timer_internal.h | 11 +- > platform/linux-generic/odp_buffer.c | 31 +- > platform/linux-generic/odp_buffer_pool.c | 711 +++++++++------------ > platform/linux-generic/odp_packet.c | 41 +- > platform/linux-generic/odp_queue.c | 1 + > platform/linux-generic/odp_schedule.c | 20 +- > platform/linux-generic/odp_timer.c | 3 +- > test/api_test/odp_timer_ping.c | 19 +- > test/validation/odp_crypto.c | 43 +- > test/validation/odp_queue.c | 19 +- > 24 files changed, 1024 insertions(+), 762 deletions(-) > create mode 100644 platform/linux-generic/include/odp_buffer_inlines.h > > diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c > index 73b0369..476cbef 100644 > --- a/example/generator/odp_generator.c > +++ b/example/generator/odp_generator.c > @@ -522,11 +522,11 @@ int main(int argc, char *argv[]) > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > odp_buffer_pool_t pool; > int num_workers; > - void *pool_base; > int i; > int first_core; > int core_count; > odp_shm_t shm; > + odp_buffer_pool_param_t params; > > /* Init ODP before calling anything else */ > if (odp_init_global(NULL, NULL)) { > @@ -589,20 +589,13 @@ int main(int argc, char *argv[]) > printf("First core: %i\n\n", first_core); > > /* Create packet pool */ > - shm = odp_shm_reserve("shm_packet_pool", > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > - pool_base = odp_shm_addr(shm); > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > - if (pool_base == NULL) { > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > - exit(EXIT_FAILURE); > - } > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params); > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > - SHM_PKT_POOL_SIZE, > - SHM_PKT_POOL_BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_PACKET); > if (pool == ODP_BUFFER_POOL_INVALID) { > EXAMPLE_ERR("Error: packet pool create failed.\n"); > exit(EXIT_FAILURE); > diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c > index 76d27c5..f96338c 100644 > --- a/example/ipsec/odp_ipsec.c > +++ b/example/ipsec/odp_ipsec.c > @@ -367,8 +367,7 @@ static > void ipsec_init_pre(void) > { > odp_queue_param_t
qparam; > - void *pool_base; > - odp_shm_t shm; > + odp_buffer_pool_param_t params; > > /* > * Create queues > @@ -401,16 +400,12 @@ void ipsec_init_pre(void) > } > > /* Create output buffer pool */ > - shm = odp_shm_reserve("shm_out_pool", > - SHM_OUT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > - > - pool_base = odp_shm_addr(shm); > + params.buf_size = SHM_OUT_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > - out_pool = odp_buffer_pool_create("out_pool", pool_base, > - SHM_OUT_POOL_SIZE, > - SHM_OUT_POOL_BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_PACKET); > + out_pool = odp_buffer_pool_create("out_pool", ODP_SHM_NULL, ¶ms); > > if (ODP_BUFFER_POOL_INVALID == out_pool) { > EXAMPLE_ERR("Error: message pool create failed.\n"); > @@ -1176,12 +1171,12 @@ main(int argc, char *argv[]) > { > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > int num_workers; > - void *pool_base; > int i; > int first_core; > int core_count; > int stream_count; > odp_shm_t shm; > + odp_buffer_pool_param_t params; > > /* Init ODP before calling anything else */ > if (odp_init_global(NULL, NULL)) { > @@ -1241,42 +1236,28 @@ main(int argc, char *argv[]) > printf("First core: %i\n\n", first_core); > > /* Create packet buffer pool */ > - shm = odp_shm_reserve("shm_packet_pool", > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > - pool_base = odp_shm_addr(shm); > - > - if (NULL == pool_base) { > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > - exit(EXIT_FAILURE); > - } > + pkt_pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > + ¶ms); > > - pkt_pool = odp_buffer_pool_create("packet_pool", pool_base, > - SHM_PKT_POOL_SIZE, > - SHM_PKT_POOL_BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_PACKET); > if (ODP_BUFFER_POOL_INVALID == pkt_pool) { > EXAMPLE_ERR("Error: packet pool create failed.\n"); > exit(EXIT_FAILURE); > } > > /* Create context buffer pool */ > - shm = odp_shm_reserve("shm_ctx_pool", > - SHM_CTX_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > - > - pool_base = odp_shm_addr(shm); > + params.buf_size = SHM_CTX_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_CTX_POOL_BUF_COUNT; > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > - if (NULL == pool_base) { > - EXAMPLE_ERR("Error: context pool mem alloc failed.\n"); > - exit(EXIT_FAILURE); > - } > + ctx_pool = odp_buffer_pool_create("ctx_pool", ODP_SHM_NULL, > + ¶ms); > > - ctx_pool = odp_buffer_pool_create("ctx_pool", pool_base, > - SHM_CTX_POOL_SIZE, > - SHM_CTX_POOL_BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_RAW); > if (ODP_BUFFER_POOL_INVALID == ctx_pool) { > EXAMPLE_ERR("Error: context pool create failed.\n"); > exit(EXIT_FAILURE); > diff --git a/example/l2fwd/odp_l2fwd.c b/example/l2fwd/odp_l2fwd.c > index ebac8c5..3c1fd6a 100644 > --- a/example/l2fwd/odp_l2fwd.c > +++ b/example/l2fwd/odp_l2fwd.c > @@ -314,12 +314,12 @@ int main(int argc, char *argv[]) > { > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > odp_buffer_pool_t pool; > - void *pool_base; > int i; > int first_core; > int core_count; > odp_pktio_t pktio; > odp_shm_t shm; > + odp_buffer_pool_param_t params; > > /* Init ODP before calling anything else */ > if (odp_init_global(NULL, NULL)) { > @@ -383,20 +383,13 @@ int main(int argc, char *argv[]) > printf("First core: %i\n\n", first_core); > > /* Create 
packet pool */ > - shm = odp_shm_reserve("shm_packet_pool", > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > - pool_base = odp_shm_addr(shm); > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > - if (pool_base == NULL) { > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > - exit(EXIT_FAILURE); > - } > + pool = odp_buffer_pool_create("packet pool", ODP_SHM_NULL, ¶ms); > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > - SHM_PKT_POOL_SIZE, > - SHM_PKT_POOL_BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_PACKET); > if (pool == ODP_BUFFER_POOL_INVALID) { > EXAMPLE_ERR("Error: packet pool create failed.\n"); > exit(EXIT_FAILURE); > diff --git a/example/odp_example/odp_example.c b/example/odp_example/odp_example.c > index 96a2912..8373f12 100644 > --- a/example/odp_example/odp_example.c > +++ b/example/odp_example/odp_example.c > @@ -954,13 +954,13 @@ int main(int argc, char *argv[]) > test_args_t args; > int num_workers; > odp_buffer_pool_t pool; > - void *pool_base; > odp_queue_t queue; > int i, j; > int prios; > int first_core; > odp_shm_t shm; > test_globals_t *globals; > + odp_buffer_pool_param_t params; > > printf("\nODP example starts\n\n"); > > @@ -1042,19 +1042,13 @@ int main(int argc, char *argv[]) > /* > * Create message pool > */ > - shm = odp_shm_reserve("msg_pool", > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > + params.buf_size = sizeof(test_message_t); > + params.buf_align = 0; > + params.num_bufs = MSG_POOL_SIZE/sizeof(test_message_t); > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > - if (pool_base == NULL) { > - EXAMPLE_ERR("Shared memory reserve failed.\n"); > - return -1; > - } > - > - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, > - sizeof(test_message_t), > - ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW); > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); > > if (pool == ODP_BUFFER_POOL_INVALID) { > EXAMPLE_ERR("Pool create failed.\n"); > diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c > index 1763c84..27318d4 100644 > --- a/example/packet/odp_pktio.c > +++ b/example/packet/odp_pktio.c > @@ -331,11 +331,11 @@ int main(int argc, char *argv[]) > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > odp_buffer_pool_t pool; > int num_workers; > - void *pool_base; > int i; > int first_core; > int core_count; > odp_shm_t shm; > + odp_buffer_pool_param_t params; > > /* Init ODP before calling anything else */ > if (odp_init_global(NULL, NULL)) { > @@ -389,20 +389,13 @@ int main(int argc, char *argv[]) > printf("First core: %i\n\n", first_core); > > /* Create packet pool */ > - shm = odp_shm_reserve("shm_packet_pool", > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > - pool_base = odp_shm_addr(shm); > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > - if (pool_base == NULL) { > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > - exit(EXIT_FAILURE); > - } > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, ¶ms); > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > - SHM_PKT_POOL_SIZE, > - SHM_PKT_POOL_BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_PACKET); > if (pool == ODP_BUFFER_POOL_INVALID) { > EXAMPLE_ERR("Error: packet pool create failed.\n"); > exit(EXIT_FAILURE); > 
diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c > index 9968bfe..0d6e31a 100644 > --- a/example/timer/odp_timer_test.c > +++ b/example/timer/odp_timer_test.c > @@ -244,12 +244,12 @@ int main(int argc, char *argv[]) > test_args_t args; > int num_workers; > odp_buffer_pool_t pool; > - void *pool_base; > odp_queue_t queue; > int first_core; > uint64_t cycles, ns; > odp_queue_param_t param; > odp_shm_t shm; > + odp_buffer_pool_param_t params; > > printf("\nODP timer example starts\n"); > > @@ -313,12 +313,13 @@ int main(int argc, char *argv[]) > */ > shm = odp_shm_reserve("msg_pool", > MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > - pool_base = odp_shm_addr(shm); > > - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, > - 0, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_TIMEOUT); > + params.buf_size = 0; > + params.buf_align = 0; > + params.num_bufs = MSG_POOL_SIZE; > + params.buf_type = ODP_BUFFER_TYPE_TIMEOUT; > + > + pool = odp_buffer_pool_create("msg_pool", shm, ¶ms); > > if (pool == ODP_BUFFER_POOL_INVALID) { > EXAMPLE_ERR("Pool create failed.\n"); > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h b/platform/linux-generic/include/api/odp_buffer_pool.h > index 30b83e0..7022daa 100644 > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > @@ -36,32 +36,101 @@ extern "C" { > #define ODP_BUFFER_POOL_INVALID 0 > > /** > + * Buffer pool parameters > + * Used to communicate buffer pool creation options. > + */ > +typedef struct odp_buffer_pool_param_t { > + size_t buf_size; /**< Buffer size in bytes. The maximum > + number of bytes application will > + store in each buffer. */ > + size_t buf_align; /**< Minimum buffer alignment in bytes. > + Valid values are powers of two. Use 0 > + for default alignment. Default will > + always be a multiple of 8. */ > + uint32_t num_bufs; /**< Number of buffers in the pool */ > + int buf_type; /**< Buffer type */ > +} odp_buffer_pool_param_t; > + > +/** > * Create a buffer pool > + * This routine is used to create a buffer pool. It take three > + * arguments: the optional name of the pool to be created, an optional shared > + * memory handle, and a parameter struct that describes the pool to be > + * created. If a name is not specified the result is an anonymous pool that > + * cannot be referenced by odp_buffer_pool_lookup(). > * > - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 chars) > - * @param base_addr Pool base address > - * @param size Pool size in bytes > - * @param buf_size Buffer size in bytes > - * @param buf_align Minimum buffer alignment > - * @param buf_type Buffer type > + * @param[in] name Name of the pool, max ODP_BUFFER_POOL_NAME_LEN-1 chars. > + * May be specified as NULL for anonymous pools. > * > - * @return Buffer pool handle > + * @param[in] shm The shared memory object in which to create the pool. > + * Use ODP_SHM_NULL to reserve default memory type > + * for the buffer type. > + * > + * @param[in] params Buffer pool parameters. > + * > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call failed. 
> */ > + > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > - void *base_addr, uint64_t size, > - size_t buf_size, size_t buf_align, > - int buf_type); > + odp_shm_t shm, > + odp_buffer_pool_param_t *params); > > +/** > + * Destroy a buffer pool previously created by odp_buffer_pool_create() > + * > + * @param[in] pool Handle of the buffer pool to be destroyed > + * > + * @return 0 on Success, -1 on Failure. > + * > + * @note This routine destroys a previously created buffer pool. This call > + * does not destroy any shared memory object passed to > + * odp_buffer_pool_create() used to store the buffer pool contents. The caller > + * takes responsibility for that. If no shared memory object was passed as > + * part of the create call, then this routine will destroy any internal shared > + * memory objects associated with the buffer pool. Results are undefined if > + * an attempt is made to destroy a buffer pool that contains allocated or > + * otherwise active buffers. > + */ > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); > > /** > * Find a buffer pool by name > * > - * @param name Name of the pool > + * @param[in] name Name of the pool > * > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found. > + * > + * @note This routine cannot be used to look up an anonymous pool (one created > + * with no name). > */ > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > > +/** > + * Buffer pool information struct > + * Used to get information about a buffer pool. > + */ > +typedef struct odp_buffer_pool_info_t { > + const char *name; /**< pool name */ > + odp_buffer_pool_param_t params; /**< pool parameters */ > +} odp_buffer_pool_info_t; > + > +/** > + * Retrieve information about a buffer pool > + * > + * @param[in] pool Buffer pool handle > + * > + * @param[out] shm Recieves odp_shm_t supplied by caller at > + * pool creation, or ODP_SHM_NULL if the > + * pool is managed internally. > + * > + * @param[out] info Receives an odp_buffer_pool_info_t object > + * that describes the pool. > + * > + * @return 0 on success, -1 if info could not be retrieved. 
> + */ > + > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > + odp_buffer_pool_info_t *info); > > /** > * Print buffer pool info > diff --git a/platform/linux-generic/include/api/odp_config.h b/platform/linux-generic/include/api/odp_config.h > index 906897c..1226d37 100644 > --- a/platform/linux-generic/include/api/odp_config.h > +++ b/platform/linux-generic/include/api/odp_config.h > @@ -49,6 +49,16 @@ extern "C" { > #define ODP_CONFIG_PKTIO_ENTRIES 64 > > /** > + * Segment size to use - > + */ > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > + > +/** > + * Maximum buffer size supported > + */ > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) > + > +/** > * @} > */ > > diff --git a/platform/linux-generic/include/api/odp_platform_types.h b/platform/linux-generic/include/api/odp_platform_types.h > index 4db47d3..b9b3aea 100644 > --- a/platform/linux-generic/include/api/odp_platform_types.h > +++ b/platform/linux-generic/include/api/odp_platform_types.h > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > > /** > + * ODP shared memory block > + */ > +typedef uint32_t odp_shm_t; > + > +/** Invalid shared memory block */ > +#define ODP_SHM_INVALID 0 > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use */ > + > +/** > * @} > */ > > diff --git a/platform/linux-generic/include/api/odp_shared_memory.h b/platform/linux-generic/include/api/odp_shared_memory.h > index 26e208b..f70db5a 100644 > --- a/platform/linux-generic/include/api/odp_shared_memory.h > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > @@ -20,6 +20,7 @@ extern "C" { > > > #include <odp_std_types.h> > +#include <odp_platform_types.h> > > /** @defgroup odp_shared_memory ODP SHARED MEMORY > * Operations on shared memory. > @@ -38,15 +39,6 @@ extern "C" { > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > > /** > - * ODP shared memory block > - */ > -typedef uint32_t odp_shm_t; > - > -/** Invalid shared memory block */ > -#define ODP_SHM_INVALID 0 > - > - > -/** > * Shared memory block info > */ > typedef struct odp_shm_info_t { > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h > new file mode 100644 > index 0000000..f33b41d > --- /dev/null > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > @@ -0,0 +1,157 @@ > +/* Copyright (c) 2014, Linaro Limited > + * All rights reserved. 
> + * > + * SPDX-License-Identifier: BSD-3-Clause > + */ > + > +/** > + * @file > + * > + * Inline functions for ODP buffer mgmt routines - implementation internal > + */ > + > +#ifndef ODP_BUFFER_INLINES_H_ > +#define ODP_BUFFER_INLINES_H_ > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t *hdr) > +{ > + odp_buffer_bits_t handle; > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > + struct pool_entry_s *pool = get_pool_entry(pool_id); > + > + handle.pool_id = pool_id; > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > + ODP_CACHE_LINE_SIZE; > + handle.seg = 0; > + > + return handle.u32; > +} > + > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > +{ > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > + if (hdl != hdr->handle.handle) { > + ODP_DBG("buf %p should have handle %x but is cached as %x\n", > + hdr, hdl, hdr->handle.handle); > + hdr->handle.handle = hdl; > + } > + return hdr->handle.handle; > +} > + > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > +{ > + odp_buffer_bits_t handle; > + uint32_t pool_id; > + uint32_t index; > + struct pool_entry_s *pool; > + > + handle.u32 = buf; > + pool_id = handle.pool_id; > + index = handle.index; > + > +#ifdef POOL_ERROR_CHECK > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > + return NULL; > + } > +#endif > + > + pool = get_pool_entry(pool_id); > + > +#ifdef POOL_ERROR_CHECK > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > + return NULL; > + } > +#endif > + > + return (odp_buffer_hdr_t *)(void *) > + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE)); > +} > + > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > +{ > + return odp_atomic_load_u32(&buf->ref_count); > +} > + > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf, > + uint32_t val) > +{ > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > +} > + > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf, > + uint32_t val) > +{ > + uint32_t tmp; > + > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > + > + if (tmp < val) { > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > + return 0; > + } else { > + return tmp - val; > + } > +} > + > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > +{ > + odp_buffer_bits_t handle; > + odp_buffer_hdr_t *buf_hdr; > + handle.u32 = buf; > + > + /* For buffer handles, segment index must be 0 */ > + if (handle.seg != 0) > + return NULL; > + > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > + > + /* If pool not created, handle is invalid */ > + if (pool->s.pool_shm == ODP_SHM_INVALID) > + return NULL; > + > + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; > + > + /* A valid buffer index must be on stride, and must be in range */ > + if ((handle.index % buf_stride != 0) || > + ((uint32_t)(handle.index / buf_stride) >= pool->s.params.num_bufs)) > + return NULL; > + > + buf_hdr = (odp_buffer_hdr_t *)(void *) > + (pool->s.pool_base_addr + > + (handle.index * ODP_CACHE_LINE_SIZE)); > + > + /* Handle is valid, so buffer is valid if it is allocated */ > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > + return NULL; > + else > + return buf_hdr; > +} > + > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > + > +static inline void 
*buffer_map(odp_buffer_hdr_t *buf, > + size_t offset, > + size_t *seglen, > + size_t limit) > +{ > + int seg_index = offset / buf->segsize; We are currently discussing the use of headroom/tailroom per segment; if that is the case, then we cannot derive seg_index directly from the above formula. > + int seg_offset = offset % buf->segsize; > + size_t buf_left = limit - offset; Maybe we need an error check that buf->total_size > offset. > + > + *seglen = buf_left < buf->segsize ? > + buf_left : buf->segsize - seg_offset; > + > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); > +} > + > +#ifdef __cplusplus > +} > +#endif > + > +#endif > diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h > index 0027bfc..29666db 100644 > --- a/platform/linux-generic/include/odp_buffer_internal.h > +++ b/platform/linux-generic/include/odp_buffer_internal.h > @@ -24,99 +24,118 @@ extern "C" { > #include <odp_buffer.h> > #include <odp_debug.h> > #include <odp_align.h> > - > -/* TODO: move these to correct files */ > - > -typedef uint64_t odp_phys_addr_t; > - > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > - > -#define ODP_BUFS_PER_CHUNK 16 > -#define ODP_BUFS_PER_SCATTER 4 > - > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > - > +#include <odp_config.h> > +#include <odp_byteorder.h> > +#include <odp_thread.h> > + > + > +#define ODP_BUFFER_MAX_SEG (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG - 1)) > + > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0, > + "ODP Segment size must be a multiple of cache line size"); > + > +#define ODP_SEGBITS(x) \ > + ((x) < 2 ? 1 : \ > + ((x) < 4 ? 2 : \ > + ((x) < 8 ? 3 : \ > + ((x) < 16 ? 4 : \ > + ((x) < 32 ? 5 : \ > + ((x) < 64 ? 6 : \ > + ((x) < 128 ? 7 : \ > + ((x) < 256 ? 8 : \ > + ((x) < 512 ? 9 : \ > + ((x) < 1024 ? 10 : \ > + ((x) < 2048 ? 11 : \ > + ((x) < 4096 ? 
12 : \ > + (0/0))))))))))))) > + > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > + "Number of segments must not exceed log of cache line size"); > > #define ODP_BUFFER_POOL_BITS 4 > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - ODP_BUFFER_SEG_BITS) > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + ODP_BUFFER_INDEX_BITS) > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > + > typedef union odp_buffer_bits_t { > uint32_t u32; > odp_buffer_t handle; > > struct { > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > uint32_t index:ODP_BUFFER_INDEX_BITS; > + uint32_t seg:ODP_BUFFER_SEG_BITS; > +#else > + uint32_t seg:ODP_BUFFER_SEG_BITS; > + uint32_t index:ODP_BUFFER_INDEX_BITS; > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > +#endif > }; > -} odp_buffer_bits_t; > > + struct { > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > +#else > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > +#endif > + }; > +} odp_buffer_bits_t; > > /* forward declaration */ > struct odp_buffer_hdr_t; > > - > -/* > - * Scatter/gather list of buffers > - */ > -typedef struct odp_buffer_scatter_t { > - /* buffer pointers */ > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > - int num_bufs; /* num buffers */ > - int pos; /* position on the list */ > - size_t total_len; /* Total length */ > -} odp_buffer_scatter_t; > - > - > -/* > - * Chunk of buffers (in single pool) > - */ > -typedef struct odp_buffer_chunk_t { > - uint32_t num_bufs; /* num buffers */ > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > -} odp_buffer_chunk_t; > - > - > /* Common buffer header */ > typedef struct odp_buffer_hdr_t { > struct odp_buffer_hdr_t *next; /* next buf in a list */ > + int allocator; /* allocating thread id */ > odp_buffer_bits_t handle; /* handle */ > - odp_phys_addr_t phys_addr; /* physical data start address */ > - void *addr; /* virtual data start address */ > - uint32_t index; /* buf index in the pool */ > + union { > + uint32_t all; > + struct { > + uint32_t zeroized:1; /* Zeroize buf data on free */ > + uint32_t hdrdata:1; /* Data is in buffer hdr */ > + }; > + } flags; > + int type; /* buffer type */ > size_t size; /* max data size */ > - size_t cur_offset; /* current offset */ > odp_atomic_u32_t ref_count; /* reference count */ > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ > - int type; /* type of next header */ > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ > - > + union { > + void *buf_ctx; /* user context */ > + void *udata_addr; /* user metadata addr */ > + }; > + size_t udata_size; /* size of user metadata */ > + uint32_t segcount; /* segment count */ > + uint32_t segsize; /* segment size */ > + void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs */ > } odp_buffer_hdr_t; > > -/* Ensure next header starts from 8 byte align */ > -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, "ODP_BUFFER_HDR_T__SIZE_ERROR"); > +typedef struct odp_buffer_hdr_stride { > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > +} odp_buffer_hdr_stride; > > +typedef 
struct odp_buf_blk_t { > + struct odp_buf_blk_t *next; > + struct odp_buf_blk_t *prev; > +} odp_buf_blk_t; > > /* Raw buffer header */ > typedef struct { > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > - uint8_t buf_data[]; /* start of buffer data area */ > } odp_raw_buffer_hdr_t; > > - > -/* Chunk header */ > -typedef struct odp_buffer_chunk_hdr_t { > - odp_buffer_hdr_t buf_hdr; > - odp_buffer_chunk_t chunk; > -} odp_buffer_chunk_hdr_t; > - > - > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > - > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src); > - > +/* Forward declarations */ > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > > #ifdef __cplusplus > } > diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h > index e0210bd..cd58f91 100644 > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > @@ -25,6 +25,35 @@ extern "C" { > #include <odp_hints.h> > #include <odp_config.h> > #include <odp_debug.h> > +#include <odp_shared_memory.h> > +#include <odp_atomic.h> > +#include <odp_atomic_internal.h> > +#include <string.h> > + > +/** > + * Buffer initialization routine prototype > + * > + * @note Routines of this type MAY be passed as part of the > + * _odp_buffer_pool_init_t structure to be called whenever a > + * buffer is allocated to initialize the user metadata > + * associated with that buffer. > + */ > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); > + > +/** > + * Buffer pool initialization parameters > + * > + * @param[in] udata_size Size of the user metadata for each buffer > + * @param[in] buf_init Function pointer to be called to initialize the > + * user metadata for each buffer in the pool. > + * @param[in] buf_init_arg Argument to be passed to buf_init(). 
> + * > + */ > +typedef struct _odp_buffer_pool_init_t { > + size_t udata_size; /**< Size of user metadata for each buffer */ > + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to use */ > + void *buf_init_arg; /**< Argument to be passed to buf_init() */ > +} _odp_buffer_pool_init_t; /**< Type of buffer initialization struct */ > > /* Use ticketlock instead of spinlock */ > #define POOL_USE_TICKETLOCK > @@ -39,6 +68,17 @@ extern "C" { > #include <odp_spinlock.h> > #endif > > +#ifdef POOL_USE_TICKETLOCK > +#include <odp_ticketlock.h> > +#define LOCK(a) odp_ticketlock_lock(a) > +#define UNLOCK(a) odp_ticketlock_unlock(a) > +#define LOCK_INIT(a) odp_ticketlock_init(a) > +#else > +#include <odp_spinlock.h> > +#define LOCK(a) odp_spinlock_lock(a) > +#define UNLOCK(a) odp_spinlock_unlock(a) > +#define LOCK_INIT(a) odp_spinlock_init(a) > +#endif > > struct pool_entry_s { > #ifdef POOL_USE_TICKETLOCK > @@ -47,66 +87,224 @@ struct pool_entry_s { > odp_spinlock_t lock ODP_ALIGNED_CACHE; > #endif > > - odp_buffer_chunk_hdr_t *head; > - uint64_t free_bufs; > char name[ODP_BUFFER_POOL_NAME_LEN]; > - > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > - uintptr_t buf_base; > - size_t buf_size; > - size_t buf_offset; > - uint64_t num_bufs; > - void *pool_base_addr; > - uint64_t pool_size; > - size_t user_size; > - size_t user_align; > - int buf_type; > - size_t hdr_size; > + odp_buffer_pool_param_t params; > + _odp_buffer_pool_init_t init_params; > + odp_buffer_pool_t pool_hdl; > + odp_shm_t pool_shm; > + union { > + uint32_t all; > + struct { > + uint32_t has_name:1; > + uint32_t user_supplied_shm:1; > + uint32_t unsegmented:1; > + uint32_t zeroized:1; > + uint32_t quiesced:1; > + uint32_t low_wm_assert:1; > + uint32_t predefined:1; > + }; > + } flags; > + uint8_t *pool_base_addr; > + size_t pool_size; > + uint32_t buf_stride; > + _odp_atomic_ptr_t buf_freelist; Minor: consider renaming it to seg_freelist, as the pool is a collection of segments and a buffer is a logical term. 
> + _odp_atomic_ptr_t blk_freelist; > + odp_atomic_u32_t bufcount; > + odp_atomic_u32_t blkcount; > + odp_atomic_u64_t bufallocs; > + odp_atomic_u64_t buffrees; > + odp_atomic_u64_t blkallocs; > + odp_atomic_u64_t blkfrees; > + odp_atomic_u64_t bufempty; > + odp_atomic_u64_t blkempty; > + odp_atomic_u64_t high_wm_count; > + odp_atomic_u64_t low_wm_count; > + size_t seg_size; > + size_t high_wm; > + size_t low_wm; > + size_t headroom; > + size_t tailroom; > }; > > +typedef union pool_entry_u { > + struct pool_entry_s s; > + > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; > +} pool_entry_t; > > extern void *pool_entry_ptr[]; > > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1) > +#define buffer_is_secure(buf) (buf->flags.zeroized) > +#define pool_is_secure(pool) (pool->flags.zeroized) > +#else > +#define buffer_is_secure(buf) 0 > +#define pool_is_secure(pool) 0 > +#endif > + > +#define TAG_ALIGN ((size_t)16) > > -static inline void *get_pool_entry(uint32_t pool_id) > +#define odp_cs(ptr, old, new) \ > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, \ > + _ODP_MEMMODEL_SC, \ > + _ODP_MEMMODEL_SC) > + > +/* Helper functions for pointer tagging to avoid ABA race conditions */ > +#define odp_tag(ptr) \ > + (((size_t)ptr) & (TAG_ALIGN - 1)) > + > +#define odp_detag(ptr) \ > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > + > +#define odp_retag(ptr, tag) \ > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > + > + > +static inline void *get_blk(struct pool_entry_s *pool) > { > - return pool_entry_ptr[pool_id]; > + void *oldhead, *myhead, *newhead; > + > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ); > + > + do { > + size_t tag = odp_tag(oldhead); > + myhead = odp_detag(oldhead); > + if (myhead == NULL) > + break; > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + 1); > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > + > + if (myhead == NULL) { > + odp_atomic_inc_u64(&pool->blkempty); > + } else { > + uint64_t blkcount = > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); > + > + /* Check for low watermark condition */ > + if (blkcount == pool->low_wm) { > + LOCK(&pool->lock); > + if (blkcount <= pool->low_wm && > + !pool->flags.low_wm_assert) { > + pool->flags.low_wm_assert = 1; > + odp_atomic_inc_u64(&pool->low_wm_count); > + } > + UNLOCK(&pool->lock); > + } > + odp_atomic_inc_u64(&pool->blkallocs); > + } > + > + return (void *)myhead; > } > > +static inline void ret_blk(struct pool_entry_s *pool, void *block) > +{ > + void *oldhead, *myhead, *myblock; > + > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ); > > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > + do { > + size_t tag = odp_tag(oldhead); > + myhead = odp_detag(oldhead); > + ((odp_buf_blk_t *)block)->next = myhead; > + myblock = odp_retag(block, tag + 1); > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > + > + odp_atomic_inc_u64(&pool->blkfrees); > + uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1); > + > + /* Check if low watermark condition should be deasserted */ > + if (blkcount == pool->high_wm) { > + LOCK(&pool->lock); > + if (blkcount == pool->high_wm && pool->flags.low_wm_assert) { > + pool->flags.low_wm_assert = 0; > + odp_atomic_inc_u64(&pool->high_wm_count); > + } > + UNLOCK(&pool->lock); > + } > +} > + > +static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool) > { > - odp_buffer_bits_t handle; > - uint32_t pool_id; > 
- uint32_t index; > - struct pool_entry_s *pool; > - odp_buffer_hdr_t *hdr; > - > - handle.u32 = buf; > - pool_id = handle.pool_id; > - index = handle.index; > - > -#ifdef POOL_ERROR_CHECK > - if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > - ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > - return NULL; > + odp_buffer_hdr_t *oldhead, *myhead, *newhead; > + > + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ); > + > + do { > + size_t tag = odp_tag(oldhead); > + myhead = odp_detag(oldhead); > + if (myhead == NULL) > + break; > + newhead = odp_retag(myhead->next, tag + 1); > + } while (odp_cs(pool->buf_freelist, oldhead, newhead) == 0); > + > + if (myhead != NULL) { > + myhead->next = myhead; > + myhead->allocator = odp_thread_id(); > + odp_atomic_inc_u32(&pool->bufcount); > + odp_atomic_inc_u64(&pool->bufallocs); > + } else { > + odp_atomic_inc_u64(&pool->bufempty); > } > -#endif > > - pool = get_pool_entry(pool_id); > + return (void *)myhead; > +} > + > +static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf) > +{ > + odp_buffer_hdr_t *oldhead, *myhead, *mybuf; > > -#ifdef POOL_ERROR_CHECK > - if (odp_unlikely(index > pool->num_bufs - 1)) { > - ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > - return NULL; > + if (!buf->flags.hdrdata && buf->type != ODP_BUFFER_TYPE_RAW) { > + while (buf->segcount > 0) { > + if (buffer_is_secure(buf) || pool_is_secure(pool)) > + memset(buf->addr[buf->segcount - 1], > + 0, buf->segsize); > + ret_blk(pool, buf->addr[--buf->segcount]); > + } > + buf->size = 0; > } > -#endif > > - hdr = (odp_buffer_hdr_t *)(pool->buf_base + index * pool->buf_size); > + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ); > > - return hdr; > + do { > + size_t tag = odp_tag(oldhead); > + myhead = odp_detag(oldhead); > + buf->next = myhead; > + mybuf = odp_retag(buf, tag + 1); > + } while (odp_cs(pool->buf_freelist, oldhead, mybuf) == 0); > + > + odp_atomic_dec_u32(&pool->bufcount); > + odp_atomic_inc_u64(&pool->buffrees); > +} > + > +static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) > +{ > + return pool_id + 1; > } > > +static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) > +{ > + return pool_hdl - 1; > +} > + > +static inline void *get_pool_entry(uint32_t pool_id) > +{ > + return pool_entry_ptr[pool_id]; > +} > + > +static inline pool_entry_t *odp_pool_to_entry(odp_buffer_pool_t pool) > +{ > + return (pool_entry_t *)get_pool_entry(pool_handle_to_index(pool)); > +} > + > +static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf) > +{ > + return odp_pool_to_entry(buf->pool_hdl); > +} > + > +static inline size_t odp_buffer_pool_segment_size(odp_buffer_pool_t pool) > +{ > + return odp_pool_to_entry(pool)->s.seg_size; > +} > > #ifdef __cplusplus > } > diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h > index 49c59b2..f34a83d 100644 > --- a/platform/linux-generic/include/odp_packet_internal.h > +++ b/platform/linux-generic/include/odp_packet_internal.h > @@ -22,6 +22,7 @@ extern "C" { > #include <odp_debug.h> > #include <odp_buffer_internal.h> > #include <odp_buffer_pool_internal.h> > +#include <odp_buffer_inlines.h> > #include <odp_packet.h> > #include <odp_packet_io.h> > > @@ -92,7 +93,8 @@ typedef union { > }; > } output_flags_t; > > -ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), "OUTPUT_FLAGS_SIZE_ERROR"); > +ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), > + 
"OUTPUT_FLAGS_SIZE_ERROR"); > > /** > * Internal Packet header > @@ -105,25 +107,23 @@ typedef struct { > error_flags_t error_flags; > output_flags_t output_flags; > > - uint32_t frame_offset; /**< offset to start of frame, even on error */ > uint32_t l2_offset; /**< offset to L2 hdr, e.g. Eth */ > uint32_t l3_offset; /**< offset to L3 hdr, e.g. IPv4, IPv6 */ > uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, also ICMP) */ > > uint32_t frame_len; > + uint32_t headroom; > + uint32_t tailroom; > > uint64_t user_ctx; /* user context */ > > odp_pktio_t input; > - > - uint32_t pad; > - uint8_t buf_data[]; /* start of buffer data area */ > } odp_packet_hdr_t; > > -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) == ODP_OFFSETOF(odp_packet_hdr_t, buf_data), > - "ODP_PACKET_HDR_T__SIZE_ERR"); > -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) % sizeof(uint64_t) == 0, > - "ODP_PACKET_HDR_T__SIZE_ERR2"); > +typedef struct odp_packet_hdr_stride { > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t))]; > +} odp_packet_hdr_stride; > + > > /** > * Return the packet header > @@ -138,6 +138,38 @@ static inline odp_packet_hdr_t *odp_packet_hdr(odp_packet_t pkt) > */ > void odp_packet_parse(odp_packet_t pkt, size_t len, size_t l2_offset); > > +/** > + * Initialize packet buffer > + */ > +static inline void packet_init(pool_entry_t *pool, > + odp_packet_hdr_t *pkt_hdr, > + size_t size) > +{ > + /* > + * Reset parser metadata. Note that we clear via memset to make > + * this routine indepenent of any additional adds to packet metadata. > + */ > + const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr); > + uint8_t *start; > + size_t len; > + > + start = (uint8_t *)pkt_hdr + start_offset; > + len = sizeof(odp_packet_hdr_t) - start_offset; > + memset(start, 0, len); > + > + /* > + * Packet headroom is set from the pool's headroom > + * Packet tailroom is rounded up to fill the last > + * segment occupied by the allocated length. 
> + */ > + pkt_hdr->frame_len = size; > + pkt_hdr->headroom = pool->s.headroom; > + pkt_hdr->tailroom = > + (pool->s.seg_size * pkt_hdr->buf_hdr.segcount) - > + (pool->s.headroom + size); > +} > + > + > #ifdef __cplusplus > } > #endif > diff --git a/platform/linux-generic/include/odp_timer_internal.h b/platform/linux-generic/include/odp_timer_internal.h > index ad28f53..2ff36ce 100644 > --- a/platform/linux-generic/include/odp_timer_internal.h > +++ b/platform/linux-generic/include/odp_timer_internal.h > @@ -51,14 +51,9 @@ typedef struct odp_timeout_hdr_t { > uint8_t buf_data[]; > } odp_timeout_hdr_t; > > - > - > -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) == > - ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), > - "ODP_TIMEOUT_HDR_T__SIZE_ERR"); > - > -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0, > - "ODP_TIMEOUT_HDR_T__SIZE_ERR2"); > +typedef struct odp_timeout_hdr_stride { > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))]; > +} odp_timeout_hdr_stride; > > > /** > diff --git a/platform/linux-generic/odp_buffer.c b/platform/linux-generic/odp_buffer.c > index bcbb99a..366190c 100644 > --- a/platform/linux-generic/odp_buffer.c > +++ b/platform/linux-generic/odp_buffer.c > @@ -5,8 +5,9 @@ > */ > > #include <odp_buffer.h> > -#include <odp_buffer_internal.h> > #include <odp_buffer_pool_internal.h> > +#include <odp_buffer_internal.h> > +#include <odp_buffer_inlines.h> > > #include <string.h> > #include <stdio.h> > @@ -16,7 +17,7 @@ void *odp_buffer_addr(odp_buffer_t buf) > { > odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); > > - return hdr->addr; > + return hdr->addr[0]; > } > > > @@ -38,11 +39,7 @@ int odp_buffer_type(odp_buffer_t buf) > > int odp_buffer_is_valid(odp_buffer_t buf) > { > - odp_buffer_bits_t handle; > - > - handle.u32 = buf; > - > - return (handle.index != ODP_BUFFER_INVALID_INDEX); > + return validate_buf(buf) != NULL; > } > > > @@ -63,28 +60,14 @@ int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf) > len += snprintf(&str[len], n-len, > " pool %i\n", hdr->pool_hdl); > len += snprintf(&str[len], n-len, > - " index %"PRIu32"\n", hdr->index); > - len += snprintf(&str[len], n-len, > - " phy_addr %"PRIu64"\n", hdr->phys_addr); > - len += snprintf(&str[len], n-len, > " addr %p\n", hdr->addr); > len += snprintf(&str[len], n-len, > " size %zu\n", hdr->size); > len += snprintf(&str[len], n-len, > - " cur_offset %zu\n", hdr->cur_offset); > - len += snprintf(&str[len], n-len, > " ref_count %i\n", > odp_atomic_load_u32(&hdr->ref_count)); > len += snprintf(&str[len], n-len, > " type %i\n", hdr->type); > - len += snprintf(&str[len], n-len, > - " Scatter list\n"); > - len += snprintf(&str[len], n-len, > - " num_bufs %i\n", hdr->scatter.num_bufs); > - len += snprintf(&str[len], n-len, > - " pos %i\n", hdr->scatter.pos); > - len += snprintf(&str[len], n-len, > - " total_len %zu\n", hdr->scatter.total_len); > > return len; > } > @@ -101,9 +84,3 @@ void odp_buffer_print(odp_buffer_t buf) > > ODP_PRINT("\n%s\n", str); > } > - > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src) > -{ > - (void)buf_dst; > - (void)buf_src; > -} > diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c > index 6a0a6b2..f545090 100644 > --- a/platform/linux-generic/odp_buffer_pool.c > +++ b/platform/linux-generic/odp_buffer_pool.c > @@ -6,8 +6,9 @@ > > #include <odp_std_types.h> > #include <odp_buffer_pool.h> > -#include <odp_buffer_pool_internal.h> > #include <odp_buffer_internal.h> > +#include 
<odp_buffer_pool_internal.h> > +#include <odp_buffer_inlines.h> > #include <odp_packet_internal.h> > #include <odp_timer_internal.h> > #include <odp_shared_memory.h> > @@ -16,57 +17,35 @@ > #include <odp_config.h> > #include <odp_hints.h> > #include <odp_debug.h> > +#include <odp_atomic_internal.h> > > #include <string.h> > #include <stdlib.h> > > > -#ifdef POOL_USE_TICKETLOCK > -#include <odp_ticketlock.h> > -#define LOCK(a) odp_ticketlock_lock(a) > -#define UNLOCK(a) odp_ticketlock_unlock(a) > -#define LOCK_INIT(a) odp_ticketlock_init(a) > -#else > -#include <odp_spinlock.h> > -#define LOCK(a) odp_spinlock_lock(a) > -#define UNLOCK(a) odp_spinlock_unlock(a) > -#define LOCK_INIT(a) odp_spinlock_init(a) > -#endif > - > - > #if ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS > #error ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS > #endif > > -#define NULL_INDEX ((uint32_t)-1) > > -union buffer_type_any_u { > +typedef union buffer_type_any_u { > odp_buffer_hdr_t buf; > odp_packet_hdr_t pkt; > odp_timeout_hdr_t tmo; > -}; > - > -ODP_STATIC_ASSERT((sizeof(union buffer_type_any_u) % 8) == 0, > - "BUFFER_TYPE_ANY_U__SIZE_ERR"); > +} odp_anybuf_t; > > /* Any buffer type header */ > typedef struct { > union buffer_type_any_u any_hdr; /* any buffer type */ > - uint8_t buf_data[]; /* start of buffer data area */ > } odp_any_buffer_hdr_t; > > - > -typedef union pool_entry_u { > - struct pool_entry_s s; > - > - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; > - > -} pool_entry_t; > +typedef struct odp_any_hdr_stride { > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_any_buffer_hdr_t))]; > +} odp_any_hdr_stride; > > > typedef struct pool_table_t { > pool_entry_t pool[ODP_CONFIG_BUFFER_POOLS]; > - > } pool_table_t; > > > @@ -77,38 +56,6 @@ static pool_table_t *pool_tbl; > void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS]; > > > -static __thread odp_buffer_chunk_hdr_t *local_chunk[ODP_CONFIG_BUFFER_POOLS]; > - > - > -static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) > -{ > - return pool_id + 1; > -} > - > - > -static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) > -{ > - return pool_hdl -1; > -} > - > - > -static inline void set_handle(odp_buffer_hdr_t *hdr, > - pool_entry_t *pool, uint32_t index) > -{ > - odp_buffer_pool_t pool_hdl = pool->s.pool_hdl; > - uint32_t pool_id = pool_handle_to_index(pool_hdl); > - > - if (pool_id >= ODP_CONFIG_BUFFER_POOLS) > - ODP_ABORT("set_handle: Bad pool handle %u\n", pool_hdl); > - > - if (index > ODP_BUFFER_MAX_INDEX) > - ODP_ERR("set_handle: Bad buffer index\n"); > - > - hdr->handle.pool_id = pool_id; > - hdr->handle.index = index; > -} > - > - > int odp_buffer_pool_init_global(void) > { > uint32_t i; > @@ -142,269 +89,244 @@ int odp_buffer_pool_init_global(void) > return 0; > } > > +/** > + * Buffer pool creation > + */ > > -static odp_buffer_hdr_t *index_to_hdr(pool_entry_t *pool, uint32_t index) > -{ > - odp_buffer_hdr_t *hdr; > - > - hdr = (odp_buffer_hdr_t *)(pool->s.buf_base + index * pool->s.buf_size); > - return hdr; > -} > - > - > -static void add_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr, uint32_t index) > -{ > - uint32_t i = chunk_hdr->chunk.num_bufs; > - chunk_hdr->chunk.buf_index[i] = index; > - chunk_hdr->chunk.num_bufs++; > -} > - > - > -static uint32_t rem_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr) > +odp_buffer_pool_t odp_buffer_pool_create(const char *name, > + odp_shm_t shm, > + odp_buffer_pool_param_t *params) > { > - uint32_t index; > + odp_buffer_pool_t pool_hdl = 
ODP_BUFFER_POOL_INVALID; > + pool_entry_t *pool; > uint32_t i; > > - i = chunk_hdr->chunk.num_bufs - 1; > - index = chunk_hdr->chunk.buf_index[i]; > - chunk_hdr->chunk.num_bufs--; > - return index; > -} > - > - > -static odp_buffer_chunk_hdr_t *next_chunk(pool_entry_t *pool, > - odp_buffer_chunk_hdr_t *chunk_hdr) > -{ > - uint32_t index; > - > - index = chunk_hdr->chunk.buf_index[ODP_BUFS_PER_CHUNK-1]; > - if (index == NULL_INDEX) > - return NULL; > - else > - return (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index); > -} > - > - > -static odp_buffer_chunk_hdr_t *rem_chunk(pool_entry_t *pool) > -{ > - odp_buffer_chunk_hdr_t *chunk_hdr; > - > - chunk_hdr = pool->s.head; > - if (chunk_hdr == NULL) { > - /* Pool is empty */ > - return NULL; > - } > - > - pool->s.head = next_chunk(pool, chunk_hdr); > - pool->s.free_bufs -= ODP_BUFS_PER_CHUNK; > + /* Default initialization paramters */ > + static _odp_buffer_pool_init_t default_init_params = { > + .udata_size = 0, > + .buf_init = NULL, > + .buf_init_arg = NULL, > + }; > > - /* unlink */ > - rem_buf_index(chunk_hdr); > - return chunk_hdr; > -} > + _odp_buffer_pool_init_t *init_params = &default_init_params; > > + if (params == NULL) > + return ODP_BUFFER_POOL_INVALID; > > -static void add_chunk(pool_entry_t *pool, odp_buffer_chunk_hdr_t *chunk_hdr) > -{ > - if (pool->s.head) /* link pool head to the chunk */ > - add_buf_index(chunk_hdr, pool->s.head->buf_hdr.index); > - else > - add_buf_index(chunk_hdr, NULL_INDEX); > + /* Restriction for v1.0: All buffers are unsegmented */ > + const int unsegmented = 1; > > - pool->s.head = chunk_hdr; > - pool->s.free_bufs += ODP_BUFS_PER_CHUNK; > -} > + /* Restriction for v1.0: No zeroization support */ > + const int zeroized = 0; > > + /* Restriction for v1.0: No udata support */ > + uint32_t udata_stride = (init_params->udata_size > sizeof(void *)) ? 
> + ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) : > + 0; > > -static void check_align(pool_entry_t *pool, odp_buffer_hdr_t *hdr) > -{ > - if (!ODP_ALIGNED_CHECK_POWER_2(hdr->addr, pool->s.user_align)) { > - ODP_ABORT("check_align: user data align error %p, align %zu\n", > - hdr->addr, pool->s.user_align); > - } > - > - if (!ODP_ALIGNED_CHECK_POWER_2(hdr, ODP_CACHE_LINE_SIZE)) { > - ODP_ABORT("check_align: hdr align error %p, align %i\n", > - hdr, ODP_CACHE_LINE_SIZE); > - } > -} > - > + uint32_t blk_size, buf_stride; > > -static void fill_hdr(void *ptr, pool_entry_t *pool, uint32_t index, > - int buf_type) > -{ > - odp_buffer_hdr_t *hdr = (odp_buffer_hdr_t *)ptr; > - size_t size = pool->s.hdr_size; > - uint8_t *buf_data; > - > - if (buf_type == ODP_BUFFER_TYPE_CHUNK) > - size = sizeof(odp_buffer_chunk_hdr_t); > + switch (params->buf_type) { > + case ODP_BUFFER_TYPE_RAW: > + blk_size = params->buf_size; > > - switch (pool->s.buf_type) { > - odp_raw_buffer_hdr_t *raw_hdr; > - odp_packet_hdr_t *packet_hdr; > - odp_timeout_hdr_t *tmo_hdr; > - odp_any_buffer_hdr_t *any_hdr; > + /* Optimize small raw buffers */ > + if (blk_size > ODP_MAX_INLINE_BUF) > + blk_size = ODP_ALIGN_ROUNDUP(blk_size, TAG_ALIGN); > > - case ODP_BUFFER_TYPE_RAW: > - raw_hdr = ptr; > - buf_data = raw_hdr->buf_data; > + buf_stride = sizeof(odp_buffer_hdr_stride); > break; > + > case ODP_BUFFER_TYPE_PACKET: > - packet_hdr = ptr; > - buf_data = packet_hdr->buf_data; > + if (unsegmented) > + blk_size = > + ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); > + else > + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, > + ODP_CONFIG_BUF_SEG_SIZE); > + buf_stride = sizeof(odp_packet_hdr_stride); > break; > + > case ODP_BUFFER_TYPE_TIMEOUT: > - tmo_hdr = ptr; > - buf_data = tmo_hdr->buf_data; > + blk_size = 0; /* Timeouts have no block data, only metadata */ > + buf_stride = sizeof(odp_timeout_hdr_stride); > break; > + > case ODP_BUFFER_TYPE_ANY: > - any_hdr = ptr; > - buf_data = any_hdr->buf_data; > + if (unsegmented) > + blk_size = > + ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); > + else > + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, > + ODP_CONFIG_BUF_SEG_SIZE); > + buf_stride = sizeof(odp_any_hdr_stride); > break; > - default: > - ODP_ABORT("Bad buffer type\n"); > - } > - > - memset(hdr, 0, size); > - > - set_handle(hdr, pool, index); > - > - hdr->addr = &buf_data[pool->s.buf_offset - pool->s.hdr_size]; > - hdr->index = index; > - hdr->size = pool->s.user_size; > - hdr->pool_hdl = pool->s.pool_hdl; > - hdr->type = buf_type; > - > - check_align(pool, hdr); > -} > - > - > -static void link_bufs(pool_entry_t *pool) > -{ > - odp_buffer_chunk_hdr_t *chunk_hdr; > - size_t hdr_size; > - size_t data_size; > - size_t data_align; > - size_t tot_size; > - size_t offset; > - size_t min_size; > - uint64_t pool_size; > - uintptr_t buf_base; > - uint32_t index; > - uintptr_t pool_base; > - int buf_type; > - > - buf_type = pool->s.buf_type; > - data_size = pool->s.user_size; > - data_align = pool->s.user_align; > - pool_size = pool->s.pool_size; > - pool_base = (uintptr_t) pool->s.pool_base_addr; > - > - if (buf_type == ODP_BUFFER_TYPE_RAW) { > - hdr_size = sizeof(odp_raw_buffer_hdr_t); > - } else if (buf_type == ODP_BUFFER_TYPE_PACKET) { > - hdr_size = sizeof(odp_packet_hdr_t); > - } else if (buf_type == ODP_BUFFER_TYPE_TIMEOUT) { > - hdr_size = sizeof(odp_timeout_hdr_t); > - } else if (buf_type == ODP_BUFFER_TYPE_ANY) { > - hdr_size = sizeof(odp_any_buffer_hdr_t); > - } else > - ODP_ABORT("odp_buffer_pool_create: Bad type %i\n", 
buf_type); > - > - > - /* Chunk must fit into buffer data area.*/ > - min_size = sizeof(odp_buffer_chunk_hdr_t) - hdr_size; > - if (data_size < min_size) > - data_size = min_size; > - > - /* Roundup data size to full cachelines */ > - data_size = ODP_CACHE_LINE_SIZE_ROUNDUP(data_size); > - > - /* Min cacheline alignment for buffer header and data */ > - data_align = ODP_CACHE_LINE_SIZE_ROUNDUP(data_align); > - offset = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size); > - > - /* Multiples of cacheline size */ > - if (data_size > data_align) > - tot_size = data_size + offset; > - else > - tot_size = data_align + offset; > - > - /* First buffer */ > - buf_base = ODP_ALIGN_ROUNDUP(pool_base + offset, data_align) - offset; > - > - pool->s.hdr_size = hdr_size; > - pool->s.buf_base = buf_base; > - pool->s.buf_size = tot_size; > - pool->s.buf_offset = offset; > - index = 0; > - > - chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index); > - pool->s.head = NULL; > - pool_size -= buf_base - pool_base; > - > - while (pool_size > ODP_BUFS_PER_CHUNK * tot_size) { > - int i; > - > - fill_hdr(chunk_hdr, pool, index, ODP_BUFFER_TYPE_CHUNK); > - > - index++; > - > - for (i = 0; i < ODP_BUFS_PER_CHUNK - 1; i++) { > - odp_buffer_hdr_t *hdr = index_to_hdr(pool, index); > - > - fill_hdr(hdr, pool, index, buf_type); > - > - add_buf_index(chunk_hdr, index); > - index++; > - } > - > - add_chunk(pool, chunk_hdr); > > - chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, > - index); > - pool->s.num_bufs += ODP_BUFS_PER_CHUNK; > - pool_size -= ODP_BUFS_PER_CHUNK * tot_size; > + default: > + return ODP_BUFFER_POOL_INVALID; > } > -} > - > - > -odp_buffer_pool_t odp_buffer_pool_create(const char *name, > - void *base_addr, uint64_t size, > - size_t buf_size, size_t buf_align, > - int buf_type) > -{ > - odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; > - pool_entry_t *pool; > - uint32_t i; > > + /* Find an unused buffer pool slot and iniitalize it as requested */ > for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) { > pool = get_pool_entry(i); > > LOCK(&pool->s.lock); > + if (pool->s.pool_shm != ODP_SHM_INVALID) { > + UNLOCK(&pool->s.lock); > + continue; > + } > + > + /* found free pool */ > + size_t block_size, mdata_size, udata_size; > > - if (pool->s.buf_base == 0) { > - /* found free pool */ > + pool->s.flags.all = 0; > > + if (name == NULL) { > + pool->s.name[0] = 0; > + } else { > strncpy(pool->s.name, name, > ODP_BUFFER_POOL_NAME_LEN - 1); > pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0; > - pool->s.pool_base_addr = base_addr; > - pool->s.pool_size = size; > - pool->s.user_size = buf_size; > - pool->s.user_align = buf_align; > - pool->s.buf_type = buf_type; > - > - link_bufs(pool); > - > - UNLOCK(&pool->s.lock); > + pool->s.flags.has_name = 1; > + } > > - pool_hdl = pool->s.pool_hdl; > - break; > + pool->s.params = *params; > + pool->s.init_params = *init_params; > + > + mdata_size = params->num_bufs * buf_stride; > + udata_size = params->num_bufs * udata_stride; > + > + /* Optimize for short buffers: Data stored in buffer hdr */ > + if (blk_size <= ODP_MAX_INLINE_BUF) > + block_size = 0; > + else > + block_size = params->num_bufs * blk_size; > + > + pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(mdata_size + > + udata_size + > + block_size); > + > + if (shm == ODP_SHM_NULL) { > + shm = odp_shm_reserve(pool->s.name, > + pool->s.pool_size, > + ODP_PAGE_SIZE, 0); > + if (shm == ODP_SHM_INVALID) { > + UNLOCK(&pool->s.lock); > + return ODP_BUFFER_INVALID; > + } > + pool->s.pool_base_addr = odp_shm_addr(shm); > + } 
else { > + odp_shm_info_t info; > + if (odp_shm_info(shm, &info) != 0 || > + info.size < pool->s.pool_size) { > + UNLOCK(&pool->s.lock); > + return ODP_BUFFER_POOL_INVALID; > + } > + pool->s.pool_base_addr = odp_shm_addr(shm); > + void *page_addr = > + ODP_ALIGN_ROUNDUP_PTR(pool->s.pool_base_addr, > + ODP_PAGE_SIZE); > + if (pool->s.pool_base_addr != page_addr) { > + if (info.size < pool->s.pool_size + > + ((size_t)page_addr - > + (size_t)pool->s.pool_base_addr)) { > + UNLOCK(&pool->s.lock); > + return ODP_BUFFER_POOL_INVALID; > + } > + pool->s.pool_base_addr = page_addr; > + } > + pool->s.flags.user_supplied_shm = 1; > } > > + pool->s.pool_shm = shm; > + > + /* Now safe to unlock since pool entry has been allocated */ > UNLOCK(&pool->s.lock); > + > + pool->s.flags.unsegmented = unsegmented; > + pool->s.flags.zeroized = zeroized; > + pool->s.seg_size = unsegmented ? > + blk_size : ODP_CONFIG_BUF_SEG_SIZE; > + > + uint8_t *udata_base_addr = pool->s.pool_base_addr + mdata_size; > + uint8_t *block_base_addr = udata_base_addr + udata_size; > + > + /* bufcount will decrement down to 0 as we populate freelist */ > + odp_atomic_store_u32(&pool->s.bufcount, params->num_bufs); > + pool->s.buf_stride = buf_stride; > + pool->s.high_wm = 0; > + pool->s.low_wm = 0; > + pool->s.headroom = 0; > + pool->s.tailroom = 0; > + _odp_atomic_ptr_store(&pool->s.buf_freelist, NULL, > + _ODP_MEMMODEL_RLX); > + _odp_atomic_ptr_store(&pool->s.blk_freelist, NULL, > + _ODP_MEMMODEL_RLX); > + > + uint8_t *buf = udata_base_addr - buf_stride; > + uint8_t *udat = udata_stride == 0 ? NULL : > + block_base_addr - udata_stride; > + > + /* Init buffer common header and add to pool buffer freelist */ > + do { > + odp_buffer_hdr_t *tmp = > + (odp_buffer_hdr_t *)(void *)buf; > + > + /* Iniitalize buffer metadata */ > + tmp->allocator = ODP_CONFIG_MAX_THREADS; > + tmp->flags.all = 0; > + tmp->flags.zeroized = zeroized; > + tmp->size = 0; > + odp_atomic_store_u32(&tmp->ref_count, 0); > + tmp->type = params->buf_type; > + tmp->pool_hdl = pool->s.pool_hdl; > + tmp->udata_addr = (void *)udat; > + tmp->udata_size = init_params->udata_size; > + tmp->segcount = 0; > + tmp->segsize = pool->s.seg_size; > + tmp->handle.handle = odp_buffer_encode_handle(tmp); > + > + /* Set 1st seg addr for zero-len buffers */ > + tmp->addr[0] = NULL; > + > + /* Special case for short buffer data */ > + if (blk_size <= ODP_MAX_INLINE_BUF) { > + tmp->flags.hdrdata = 1; > + if (blk_size > 0) { > + tmp->segcount = 1; > + tmp->addr[0] = &tmp->addr[1]; > + tmp->size = blk_size; > + } > + } > + > + /* Push buffer onto pool's freelist */ > + ret_buf(&pool->s, tmp); > + buf -= buf_stride; > + udat -= udata_stride; > + } while (buf >= pool->s.pool_base_addr); > + > + /* Form block freelist for pool */ > + uint8_t *blk = pool->s.pool_base_addr + pool->s.pool_size - > + pool->s.seg_size; > + > + if (blk_size > ODP_MAX_INLINE_BUF) > + do { > + ret_blk(&pool->s, blk); > + blk -= pool->s.seg_size; > + } while (blk >= block_base_addr); > + > + /* Initialize pool statistics counters */ > + odp_atomic_store_u64(&pool->s.bufallocs, 0); > + odp_atomic_store_u64(&pool->s.buffrees, 0); > + odp_atomic_store_u64(&pool->s.blkallocs, 0); > + odp_atomic_store_u64(&pool->s.blkfrees, 0); > + odp_atomic_store_u64(&pool->s.bufempty, 0); > + odp_atomic_store_u64(&pool->s.blkempty, 0); > + odp_atomic_store_u64(&pool->s.high_wm_count, 0); > + odp_atomic_store_u64(&pool->s.low_wm_count, 0); > + > + pool_hdl = pool->s.pool_hdl; > + break; > } > > return pool_hdl; > @@ -431,145 +353,126 @@ 
odp_buffer_pool_t odp_buffer_pool_lookup(const char *name) > return ODP_BUFFER_POOL_INVALID; > } > > - > -odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size) > { > - pool_entry_t *pool; > - odp_buffer_chunk_hdr_t *chunk; > - odp_buffer_bits_t handle; > - uint32_t pool_id = pool_handle_to_index(pool_hdl); > - > - pool = get_pool_entry(pool_id); > - chunk = local_chunk[pool_id]; > - > - if (chunk == NULL) { > - LOCK(&pool->s.lock); > - chunk = rem_chunk(pool); > - UNLOCK(&pool->s.lock); > - > - if (chunk == NULL) > - return ODP_BUFFER_INVALID; > - > - local_chunk[pool_id] = chunk; > + pool_entry_t *pool = odp_pool_to_entry(pool_hdl); > + size_t totsize = pool->s.headroom + size + pool->s.tailroom; > + odp_anybuf_t *buf; > + uint8_t *blk; > + > + if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) || > + (!pool->s.flags.unsegmented && totsize > ODP_CONFIG_BUF_MAX_SIZE)) > + return ODP_BUFFER_INVALID; > + > + buf = (odp_anybuf_t *)(void *)get_buf(&pool->s); > + > + if (buf == NULL) > + return ODP_BUFFER_INVALID; > + > + /* Get blocks for this buffer, if pool uses application data */ > + if (buf->buf.size < totsize) { > + size_t needed = totsize - buf->buf.size; > + do { > + blk = get_blk(&pool->s); > + if (blk == NULL) { > + ret_buf(&pool->s, &buf->buf); > + return ODP_BUFFER_INVALID; > + } > + buf->buf.addr[buf->buf.segcount++] = blk; > + needed -= pool->s.seg_size; > + } while ((ssize_t)needed > 0); > + buf->buf.size = buf->buf.segcount * pool->s.seg_size; > } > > - if (chunk->chunk.num_bufs == 0) { > - /* give the chunk buffer */ > - local_chunk[pool_id] = NULL; > - chunk->buf_hdr.type = pool->s.buf_type; > + /* By default, buffers inherit their pool's zeroization setting */ > + buf->buf.flags.zeroized = pool->s.flags.zeroized; > > - handle = chunk->buf_hdr.handle; > - } else { > - odp_buffer_hdr_t *hdr; > - uint32_t index; > - index = rem_buf_index(chunk); > - hdr = index_to_hdr(pool, index); > + if (buf->buf.type == ODP_BUFFER_TYPE_PACKET) { > + packet_init(pool, &buf->pkt, size); > > - handle = hdr->handle; > + if (pool->s.init_params.buf_init != NULL) > + (*pool->s.init_params.buf_init) > + (buf->buf.handle.handle, > + pool->s.init_params.buf_init_arg); > } > > - return handle.u32; > + return odp_hdr_to_buf(&buf->buf); > } > > - > -void odp_buffer_free(odp_buffer_t buf) > +odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) > { > - odp_buffer_hdr_t *hdr; > - uint32_t pool_id; > - pool_entry_t *pool; > - odp_buffer_chunk_hdr_t *chunk_hdr; > - > - hdr = odp_buf_to_hdr(buf); > - pool_id = pool_handle_to_index(hdr->pool_hdl); > - pool = get_pool_entry(pool_id); > - chunk_hdr = local_chunk[pool_id]; > - > - if (chunk_hdr && chunk_hdr->chunk.num_bufs == ODP_BUFS_PER_CHUNK - 1) { > - /* Current chunk is full. 
Push back to the pool */ > - LOCK(&pool->s.lock); > - add_chunk(pool, chunk_hdr); > - UNLOCK(&pool->s.lock); > - chunk_hdr = NULL; > - } > - > - if (chunk_hdr == NULL) { > - /* Use this buffer */ > - chunk_hdr = (odp_buffer_chunk_hdr_t *)hdr; > - local_chunk[pool_id] = chunk_hdr; > - chunk_hdr->chunk.num_bufs = 0; > - } else { > - /* Add to current chunk */ > - add_buf_index(chunk_hdr, hdr->index); > - } > + return buffer_alloc(pool_hdl, > + odp_pool_to_entry(pool_hdl)->s.params.buf_size); > } > > - > -odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) > +void odp_buffer_free(odp_buffer_t buf) > { > - odp_buffer_hdr_t *hdr; > - > - hdr = odp_buf_to_hdr(buf); > - return hdr->pool_hdl; > + odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf); > + pool_entry_t *pool = odp_buf_to_pool(buf_hdr); > + ret_buf(&pool->s, buf_hdr); > } > > - > void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl) > { > pool_entry_t *pool; > - odp_buffer_chunk_hdr_t *chunk_hdr; > - uint32_t i; > uint32_t pool_id; > > pool_id = pool_handle_to_index(pool_hdl); > pool = get_pool_entry(pool_id); > > - ODP_PRINT("Pool info\n"); > - ODP_PRINT("---------\n"); > - ODP_PRINT(" pool %i\n", pool->s.pool_hdl); > - ODP_PRINT(" name %s\n", pool->s.name); > - ODP_PRINT(" pool base %p\n", pool->s.pool_base_addr); > - ODP_PRINT(" buf base 0x%"PRIxPTR"\n", pool->s.buf_base); > - ODP_PRINT(" pool size 0x%"PRIx64"\n", pool->s.pool_size); > - ODP_PRINT(" buf size %zu\n", pool->s.user_size); > - ODP_PRINT(" buf align %zu\n", pool->s.user_align); > - ODP_PRINT(" hdr size %zu\n", pool->s.hdr_size); > - ODP_PRINT(" alloc size %zu\n", pool->s.buf_size); > - ODP_PRINT(" offset to hdr %zu\n", pool->s.buf_offset); > - ODP_PRINT(" num bufs %"PRIu64"\n", pool->s.num_bufs); > - ODP_PRINT(" free bufs %"PRIu64"\n", pool->s.free_bufs); > - > - /* first chunk */ > - chunk_hdr = pool->s.head; > - > - if (chunk_hdr == NULL) { > - ODP_ERR(" POOL EMPTY\n"); > - return; > - } > - > - ODP_PRINT("\n First chunk\n"); > - > - for (i = 0; i < chunk_hdr->chunk.num_bufs - 1; i++) { > - uint32_t index; > - odp_buffer_hdr_t *hdr; > - > - index = chunk_hdr->chunk.buf_index[i]; > - hdr = index_to_hdr(pool, index); > - > - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, hdr->addr, > - index); > - } > - > - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, chunk_hdr->buf_hdr.addr, > - chunk_hdr->buf_hdr.index); > - > - /* next chunk */ > - chunk_hdr = next_chunk(pool, chunk_hdr); > + uint32_t bufcount = odp_atomic_load_u32(&pool->s.bufcount); > + uint32_t blkcount = odp_atomic_load_u32(&pool->s.blkcount); > + uint64_t bufallocs = odp_atomic_load_u64(&pool->s.bufallocs); > + uint64_t buffrees = odp_atomic_load_u64(&pool->s.buffrees); > + uint64_t blkallocs = odp_atomic_load_u64(&pool->s.blkallocs); > + uint64_t blkfrees = odp_atomic_load_u64(&pool->s.blkfrees); > + uint64_t bufempty = odp_atomic_load_u64(&pool->s.bufempty); > + uint64_t blkempty = odp_atomic_load_u64(&pool->s.blkempty); > + uint64_t hiwmct = odp_atomic_load_u64(&pool->s.high_wm_count); > + uint64_t lowmct = odp_atomic_load_u64(&pool->s.low_wm_count); > + > + ODP_DBG("Pool info\n"); > + ODP_DBG("---------\n"); > + ODP_DBG(" pool %i\n", pool->s.pool_hdl); > + ODP_DBG(" name %s\n", > + pool->s.flags.has_name ? pool->s.name : "Unnamed Pool"); > + ODP_DBG(" pool type %s\n", > + pool->s.params.buf_type == ODP_BUFFER_TYPE_RAW ? "raw" : > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_PACKET ? "packet" : > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_TIMEOUT ? 
"timeout" : > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_ANY ? "any" : > + "unknown")))); > + ODP_DBG(" pool storage %sODP managed\n", > + pool->s.flags.user_supplied_shm ? > + "application provided, " : ""); > + ODP_DBG(" pool status %s\n", > + pool->s.flags.quiesced ? "quiesced" : "active"); > + ODP_DBG(" pool opts %s, %s, %s\n", > + pool->s.flags.unsegmented ? "unsegmented" : "segmented", > + pool->s.flags.zeroized ? "zeroized" : "non-zeroized", > + pool->s.flags.predefined ? "predefined" : "created"); > + ODP_DBG(" pool base %p\n", pool->s.pool_base_addr); > + ODP_DBG(" pool size %zu (%zu pages)\n", > + pool->s.pool_size, pool->s.pool_size / ODP_PAGE_SIZE); > + ODP_DBG(" udata size %zu\n", pool->s.init_params.udata_size); > + ODP_DBG(" buf size %zu\n", pool->s.params.buf_size); > + ODP_DBG(" num bufs %u\n", pool->s.params.num_bufs); > + ODP_DBG(" bufs in use %u\n", bufcount); > + ODP_DBG(" buf allocs %lu\n", bufallocs); > + ODP_DBG(" buf frees %lu\n", buffrees); > + ODP_DBG(" buf empty %lu\n", bufempty); > + ODP_DBG(" blk size %zu\n", > + pool->s.seg_size > ODP_MAX_INLINE_BUF ? pool->s.seg_size : 0); > + ODP_DBG(" blks available %u\n", blkcount); > + ODP_DBG(" blk allocs %lu\n", blkallocs); > + ODP_DBG(" blk frees %lu\n", blkfrees); > + ODP_DBG(" blk empty %lu\n", blkempty); > + ODP_DBG(" high wm count %lu\n", hiwmct); > + ODP_DBG(" low wm count %lu\n", lowmct); > +} > > - if (chunk_hdr) { > - ODP_PRINT(" Next chunk\n"); > - ODP_PRINT(" addr %p, id %"PRIu32"\n", chunk_hdr->buf_hdr.addr, > - chunk_hdr->buf_hdr.index); > - } > > - ODP_PRINT("\n"); > +odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) > +{ > + return odp_buf_to_hdr(buf)->pool_hdl; > } > diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c > index f8fd8ef..8deae3d 100644 > --- a/platform/linux-generic/odp_packet.c > +++ b/platform/linux-generic/odp_packet.c > @@ -23,17 +23,9 @@ static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr, > void odp_packet_init(odp_packet_t pkt) > { > odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); > - const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr); > - uint8_t *start; > - size_t len; > - > - start = (uint8_t *)pkt_hdr + start_offset; > - len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset; > - memset(start, 0, len); > + pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr); > > - pkt_hdr->l2_offset = ODP_PACKET_OFFSET_INVALID; > - pkt_hdr->l3_offset = ODP_PACKET_OFFSET_INVALID; > - pkt_hdr->l4_offset = ODP_PACKET_OFFSET_INVALID; > + packet_init(pool, pkt_hdr, 0); > } > > odp_packet_t odp_packet_from_buffer(odp_buffer_t buf) > @@ -63,7 +55,7 @@ uint8_t *odp_packet_addr(odp_packet_t pkt) > > uint8_t *odp_packet_data(odp_packet_t pkt) > { > - return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->frame_offset; > + return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->headroom; > } > > > @@ -130,20 +122,13 @@ void odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset) > > int odp_packet_is_segmented(odp_packet_t pkt) > { > - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt); > - > - if (buf_hdr->scatter.num_bufs == 0) > - return 0; > - else > - return 1; > + return odp_packet_hdr(pkt)->buf_hdr.segcount > 1; > } > > > int odp_packet_seg_count(odp_packet_t pkt) > { > - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt); > - > - return (int)buf_hdr->scatter.num_bufs + 1; > + return odp_packet_hdr(pkt)->buf_hdr.segcount; > } > > > @@ -169,7 +154,7 @@ void odp_packet_parse(odp_packet_t pkt, size_t 
len, size_t frame_offset) > uint8_t ip_proto = 0; > > pkt_hdr->input_flags.eth = 1; > - pkt_hdr->frame_offset = frame_offset; > + pkt_hdr->l2_offset = frame_offset; > pkt_hdr->frame_len = len; > > if (len > ODPH_ETH_LEN_MAX) > @@ -329,8 +314,6 @@ void odp_packet_print(odp_packet_t pkt) > len += snprintf(&str[len], n-len, > " output_flags 0x%x\n", hdr->output_flags.all); > len += snprintf(&str[len], n-len, > - " frame_offset %u\n", hdr->frame_offset); > - len += snprintf(&str[len], n-len, > " l2_offset %u\n", hdr->l2_offset); > len += snprintf(&str[len], n-len, > " l3_offset %u\n", hdr->l3_offset); > @@ -357,14 +340,13 @@ int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src) > if (pkt_dst == ODP_PACKET_INVALID || pkt_src == ODP_PACKET_INVALID) > return -1; > > - if (pkt_hdr_dst->buf_hdr.size < > - pkt_hdr_src->frame_len + pkt_hdr_src->frame_offset) > + if (pkt_hdr_dst->buf_hdr.size < pkt_hdr_src->frame_len) > return -1; > > /* Copy packet header */ > start_dst = (uint8_t *)pkt_hdr_dst + start_offset; > start_src = (uint8_t *)pkt_hdr_src + start_offset; > - len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset; > + len = sizeof(odp_packet_hdr_t) - start_offset; > memcpy(start_dst, start_src, len); > > /* Copy frame payload */ > @@ -373,13 +355,6 @@ int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src) > len = pkt_hdr_src->frame_len; > memcpy(start_dst, start_src, len); > > - /* Copy useful things from the buffer header */ > - pkt_hdr_dst->buf_hdr.cur_offset = pkt_hdr_src->buf_hdr.cur_offset; > - > - /* Create a copy of the scatter list */ > - odp_buffer_copy_scatter(odp_packet_to_buffer(pkt_dst), > - odp_packet_to_buffer(pkt_src)); > - > return 0; > } > > diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c > index 1318bcd..b68a7c7 100644 > --- a/platform/linux-generic/odp_queue.c > +++ b/platform/linux-generic/odp_queue.c > @@ -11,6 +11,7 @@ > #include <odp_buffer.h> > #include <odp_buffer_internal.h> > #include <odp_buffer_pool_internal.h> > +#include <odp_buffer_inlines.h> > #include <odp_internal.h> > #include <odp_shared_memory.h> > #include <odp_schedule_internal.h> > diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c > index cc84e11..a8f1938 100644 > --- a/platform/linux-generic/odp_schedule.c > +++ b/platform/linux-generic/odp_schedule.c > @@ -83,8 +83,8 @@ int odp_schedule_init_global(void) > { > odp_shm_t shm; > odp_buffer_pool_t pool; > - void *pool_base; > int i, j; > + odp_buffer_pool_param_t params; > > ODP_DBG("Schedule init ... 
"); > > @@ -99,20 +99,12 @@ int odp_schedule_init_global(void) > return -1; > } > > - shm = odp_shm_reserve("odp_sched_pool", > - SCHED_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > + params.buf_size = sizeof(queue_desc_t); > + params.buf_align = ODP_CACHE_LINE_SIZE; > + params.num_bufs = SCHED_POOL_SIZE/sizeof(queue_desc_t); > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > - pool_base = odp_shm_addr(shm); > - > - if (pool_base == NULL) { > - ODP_ERR("Schedule init: Shm reserve failed.\n"); > - return -1; > - } > - > - pool = odp_buffer_pool_create("odp_sched_pool", pool_base, > - SCHED_POOL_SIZE, sizeof(queue_desc_t), > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_RAW); > + pool = odp_buffer_pool_create("odp_sched_pool", ODP_SHM_NULL, ¶ms); > > if (pool == ODP_BUFFER_POOL_INVALID) { > ODP_ERR("Schedule init: Pool create failed.\n"); > diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c > index 313c713..914cb58 100644 > --- a/platform/linux-generic/odp_timer.c > +++ b/platform/linux-generic/odp_timer.c > @@ -5,9 +5,10 @@ > */ > > #include <odp_timer.h> > -#include <odp_timer_internal.h> > #include <odp_time.h> > #include <odp_buffer_pool_internal.h> > +#include <odp_buffer_inlines.h> > +#include <odp_timer_internal.h> > #include <odp_internal.h> > #include <odp_atomic.h> > #include <odp_spinlock.h> > diff --git a/test/api_test/odp_timer_ping.c b/test/api_test/odp_timer_ping.c > index 7704181..1566f4f 100644 > --- a/test/api_test/odp_timer_ping.c > +++ b/test/api_test/odp_timer_ping.c > @@ -319,9 +319,8 @@ int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED) > ping_arg_t pingarg; > odp_queue_t queue; > odp_buffer_pool_t pool; > - void *pool_base; > int i; > - odp_shm_t shm; > + odp_buffer_pool_param_t params; > > if (odp_test_global_init() != 0) > return -1; > @@ -334,14 +333,14 @@ int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED) > /* > * Create message pool > */ > - shm = odp_shm_reserve("msg_pool", > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > - pool_base = odp_shm_addr(shm); > - > - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, > - BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_RAW); > + > + params.buf_size = BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = MSG_POOL_SIZE/BUF_SIZE; > + params.buf_type = ODP_BUFFER_TYPE_RAW; > + > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); > + > if (pool == ODP_BUFFER_POOL_INVALID) { > LOG_ERR("Pool create failed.\n"); > return -1; > diff --git a/test/validation/odp_crypto.c b/test/validation/odp_crypto.c > index 9342aca..e329b05 100644 > --- a/test/validation/odp_crypto.c > +++ b/test/validation/odp_crypto.c > @@ -31,8 +31,7 @@ CU_SuiteInfo suites[] = { > > int main(void) > { > - odp_shm_t shm; > - void *pool_base; > + odp_buffer_pool_param_t params; > odp_buffer_pool_t pool; > odp_queue_t out_queue; > > @@ -42,21 +41,13 @@ int main(void) > } > odp_init_local(); > > - shm = odp_shm_reserve("shm_packet_pool", > - SHM_PKT_POOL_SIZE, > - ODP_CACHE_LINE_SIZE, 0); > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > - pool_base = odp_shm_addr(shm); > - if (!pool_base) { > - fprintf(stderr, "Packet pool allocation failed.\n"); > - return -1; > - } > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, ¶ms); > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > - SHM_PKT_POOL_SIZE, > - SHM_PKT_POOL_BUF_SIZE, > - 
ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_PACKET); > if (ODP_BUFFER_POOL_INVALID == pool) { > fprintf(stderr, "Packet pool creation failed.\n"); > return -1; > @@ -67,20 +58,14 @@ int main(void) > fprintf(stderr, "Crypto outq creation failed.\n"); > return -1; > } > - shm = odp_shm_reserve("shm_compl_pool", > - SHM_COMPL_POOL_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_SHM_SW_ONLY); > - pool_base = odp_shm_addr(shm); > - if (!pool_base) { > - fprintf(stderr, "Completion pool allocation failed.\n"); > - return -1; > - } > - pool = odp_buffer_pool_create("compl_pool", pool_base, > - SHM_COMPL_POOL_SIZE, > - SHM_COMPL_POOL_BUF_SIZE, > - ODP_CACHE_LINE_SIZE, > - ODP_BUFFER_TYPE_RAW); > + > + params.buf_size = SHM_COMPL_POOL_BUF_SIZE; > + params.buf_align = 0; > + params.num_bufs = SHM_COMPL_POOL_SIZE/SHM_COMPL_POOL_BUF_SIZE; > + params.buf_type = ODP_BUFFER_TYPE_RAW; > + > + pool = odp_buffer_pool_create("compl_pool", ODP_SHM_NULL, ¶ms); > + > if (ODP_BUFFER_POOL_INVALID == pool) { > fprintf(stderr, "Completion pool creation failed.\n"); > return -1; > diff --git a/test/validation/odp_queue.c b/test/validation/odp_queue.c > index 09dba0e..9d0f3d7 100644 > --- a/test/validation/odp_queue.c > +++ b/test/validation/odp_queue.c > @@ -16,21 +16,14 @@ static int queue_contest = 0xff; > static int test_odp_buffer_pool_init(void) > { > odp_buffer_pool_t pool; > - void *pool_base; > - odp_shm_t shm; > + odp_buffer_pool_param_t params; > > - shm = odp_shm_reserve("msg_pool", > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > + params.buf_size = 0; > + params.buf_align = ODP_CACHE_LINE_SIZE; > + params.num_bufs = 1024 * 10; > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > - pool_base = odp_shm_addr(shm); > - > - if (NULL == pool_base) { > - printf("Shared memory reserve failed.\n"); > - return -1; > - } > - > - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, 0, > - ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW); > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); > > if (ODP_BUFFER_POOL_INVALID == pool) { > printf("Pool create failed.\n"); > -- > 1.8.3.2 > > > _______________________________________________ > lng-odp mailing list > lng-odp@lists.linaro.org > http://lists.linaro.org/mailman/listinfo/lng-odp
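The conversions above all follow the same pattern: the explicit
odp_shm_reserve()/odp_shm_addr() pair disappears and the six create
arguments collapse into a parameter struct. A minimal sketch of the new
calling convention (the pool name and sizes below are illustrative, not
taken from the patch):

	#include <odp.h>

	#define MY_POOL_SIZE (512 * 2048)	/* illustrative total size */
	#define MY_BUF_SIZE  1856		/* illustrative buffer size */

	static odp_buffer_pool_t my_create_pool(void)
	{
		odp_buffer_pool_param_t params;

		params.buf_size  = MY_BUF_SIZE;
		params.buf_align = 0;	/* 0 selects the default alignment */
		params.num_bufs  = MY_POOL_SIZE / MY_BUF_SIZE;
		params.buf_type  = ODP_BUFFER_TYPE_PACKET;

		/* ODP_SHM_NULL: the implementation reserves its own memory */
		return odp_buffer_pool_create("my_pool", ODP_SHM_NULL,
					      &params);
	}

As in the hunks above, callers still compare the result against
ODP_BUFFER_POOL_INVALID to detect failure.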
Hi, This is the proposed way to break up your patch: 1. break circular dependencies 2. move inline functions to a new "odp_buffer_inlines.h" file. 3. restructuring ODP buffer pool 4. odp_buffer_pool_create 5. odp_buffer_pool_destroy 6. odp_buffer_pool_info see more comments inline. On 2 December 2014 at 22:50, Bill Fischofer <bill.fischofer@linaro.org> wrote: > > > On Tue, Dec 2, 2014 at 3:05 PM, Anders Roxell <anders.roxell@linaro.org> > wrote: >> >> prefix this patch with: >> api: ... >> >> On 2014-12-02 13:17, Bill Fischofer wrote: >> > Restructure ODP buffer pool internals to support new APIs. >> >> The comment doesn't add any extra value from the short log. >> "Modifys linux-generic, example and test to make them ready for adding the >> new odp_buffer_pool_create API" > > > The comment is descriptive of what's in the patch. > >> >> >> > Implements new odp_buffer_pool_create() API. >> > >> > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> >> > --- >> > example/generator/odp_generator.c | 19 +- >> > example/ipsec/odp_ipsec.c | 57 +- >> > example/l2fwd/odp_l2fwd.c | 19 +- >> > example/odp_example/odp_example.c | 18 +- >> > example/packet/odp_pktio.c | 19 +- >> > example/timer/odp_timer_test.c | 13 +- >> > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- >> > platform/linux-generic/include/api/odp_config.h | 10 + >> > .../linux-generic/include/api/odp_platform_types.h | 9 + >> >> Group stuff into odp_platform_types.h should be its own patch. >> > > The change to odp_platform_types.h moves typedefs from odp_shared_memory.h > to break > circular dependencies that would otherwise arise. As a result, this is not > separable from > the rest of this patch. don't agree. > > >> >> > .../linux-generic/include/api/odp_shared_memory.h | 10 +- >> > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ >> >> Creating an inline file should be its own patch. > > > No, it's not independent of the rest of these changes. This is a > restructuring patch. The rule that > you've promoted is that each patch can be applied independently. Trying to > make this it's own > patch wouldn't follow that rule. Good that you are trying. You are saying "ODP buffer pool restructure" in the short log, please do that and *only* that in this patch then! Do not add new APIs or change existing APIs, only restructure! > >> >> >> > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- >> > .../include/odp_buffer_pool_internal.h | 278 ++++++-- >> > .../linux-generic/include/odp_packet_internal.h | 50 +- >> > .../linux-generic/include/odp_timer_internal.h | 11 +- >> > platform/linux-generic/odp_buffer.c | 31 +- >> > platform/linux-generic/odp_buffer_pool.c | 711 >> > +++++++++------------ >> > platform/linux-generic/odp_packet.c | 41 +- >> > platform/linux-generic/odp_queue.c | 1 + >> > platform/linux-generic/odp_schedule.c | 20 +- >> > platform/linux-generic/odp_timer.c | 3 +- >> > test/api_test/odp_timer_ping.c | 19 +- >> > test/validation/odp_crypto.c | 43 +- >> > test/validation/odp_queue.c | 19 +- >> > 24 files changed, 1024 insertions(+), 762 deletions(-) >> > create mode 100644 platform/linux-generic/include/odp_buffer_inlines.h >> > >> >> [...] 
>> >> > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h >> > b/platform/linux-generic/include/api/odp_buffer_pool.h >> > index 30b83e0..7022daa 100644 >> > --- a/platform/linux-generic/include/api/odp_buffer_pool.h >> > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h >> > @@ -36,32 +36,101 @@ extern "C" { >> > #define ODP_BUFFER_POOL_INVALID 0 >> > >> > /** >> > + * Buffer pool parameters >> > + * Used to communicate buffer pool creation options. >> > + */ >> > +typedef struct odp_buffer_pool_param_t { >> > + size_t buf_size; /**< Buffer size in bytes. The maximum >> > + number of bytes application will >> >> "...bytes the application..." > > > The definite article is optional in english grammar here. This level of > nit-picking isn't > needed. yes, its a nit that you can fix when you sen version 5 or whatever version you will send out. > >> >> >> > + store in each buffer. */ >> > + size_t buf_align; /**< Minimum buffer alignment in bytes. >> > + Valid values are powers of two. Use 0 >> > + for default alignment. Default will >> > + always be a multiple of 8. */ >> > + uint32_t num_bufs; /**< Number of buffers in the pool */ >> > + int buf_type; /**< Buffer type */ >> > +} odp_buffer_pool_param_t; >> > + >> > +/** >> > * Create a buffer pool >> > + * This routine is used to create a buffer pool. It take three >> > + * arguments: the optional name of the pool to be created, an optional >> > shared >> > + * memory handle, and a parameter struct that describes the pool to be >> > + * created. If a name is not specified the result is an anonymous pool >> > that >> > + * cannot be referenced by odp_buffer_pool_lookup(). >> > * >> > - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 >> > chars) >> > - * @param base_addr Pool base address >> > - * @param size Pool size in bytes >> > - * @param buf_size Buffer size in bytes >> > - * @param buf_align Minimum buffer alignment >> > - * @param buf_type Buffer type >> > + * @param[in] name Name of the pool, max ODP_BUFFER_POOL_NAME_LEN-1 >> > chars. >> > + * May be specified as NULL for anonymous pools. >> > * >> > - * @return Buffer pool handle >> > + * @param[in] shm The shared memory object in which to create the >> > pool. >> > + * Use ODP_SHM_NULL to reserve default memory type >> > + * for the buffer type. >> > + * >> > + * @param[in] params Buffer pool parameters. >> > + * >> > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call >> > failed. >> >> Should be >> @retval Buffer pool handle on success >> @retval ODP_BUFFER_POOL_INVALID if call failed 1 (if it can fail list the >> reasons) >> @retval ODP_BUFFER_POOL_INVALID if call failed 2 (if it can fail list the >> reasons) >> @retval ODP_BUFFER_POOL_INVALID if call failed N > > > The documentation is consistent with that used in the rest of the file. If > we want a doc cleanup patch > that should be a separate patch and cover the whole file, not just one > routine that would otherwise stand > out as an anomaly. I'll be happy to write that after this patch gets > merged. 
Wasn't this an "ODP buffer pool restructure" patch? I would say that this
goes under restructure, and/or maybe it goes under a new patch
"api: change odp_buffer_pool_create" =)

>
>>
>>
>> > */
>> > +
>> > odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>> > - void *base_addr, uint64_t size,
>> > - size_t buf_size, size_t
>> > buf_align,
>> > - int buf_type);
>> > + odp_shm_t shm,
>> > + odp_buffer_pool_param_t *params);
>> >
>> > +/**
>> > + * Destroy a buffer pool previously created by odp_buffer_pool_create()
>> > + *
>> > + * @param[in] pool Handle of the buffer pool to be destroyed
>> > + *
>> > + * @return 0 on Success, -1 on Failure.
>>
>> use @retval here as well and list the reasons how it can fail.
>
>
> Same comment as above.

I'm going to copy you here: "Same comment as above." =)

>
>>
>>
>> > + *
>> > + * @note This routine destroys a previously created buffer pool. This
>> > call
>> > + * does not destroy any shared memory object passed to
>> > + * odp_buffer_pool_create() used to store the buffer pool contents. The
>> > caller
>> > + * takes responsibility for that. If no shared memory object was passed
>> > as
>> > + * part of the create call, then this routine will destroy any internal
>> > shared
>> > + * memory objects associated with the buffer pool. Results are
>> > undefined if
>> > + * an attempt is made to destroy a buffer pool that contains allocated
>> > or
>> > + * otherwise active buffers.
>> > + */
>> > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool);
>>
>> This doesn't belong in this patch, belongs in the
>> odp_buffer_pool_destroy patch.
>>
>
> That patch is for the implementation of the function, as described. This is
> benign here.
>
>>
>> >
>> > /**
>> > * Find a buffer pool by name
>> > *
>> > - * @param name Name of the pool
>> > + * @param[in] name Name of the pool
>> > *
>> > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found.
>>
>> Fix this.
>
>
> Same comments as above.
>
>>
>>
>> > + *
>> > + * @note This routine cannot be used to look up an anonymous pool (one
>> > created
>> > + * with no name).
>>
>> How can I delete an anonymous pool?
>
>
> You can't. This is just implementing what's been specified. If we want to
> change the spec
> that can be addressed in a follow-on patch.

OK, I didn't know. Thank you for the explanation.

>
>>
>>
>> > */
>> > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name);
>> >
>> > +/**
>> > + * Buffer pool information struct
>> > + * Used to get information about a buffer pool.
>> > + */
>> > +typedef struct odp_buffer_pool_info_t {
>> > + const char *name; /**< pool name */
>> > + odp_buffer_pool_param_t params; /**< pool parameters */
>> > +} odp_buffer_pool_info_t;
>> > +
>> > +/**
>> > + * Retrieve information about a buffer pool
>> > + *
>> > + * @param[in] pool Buffer pool handle
>> > + *
>> > + * @param[out] shm Recieves odp_shm_t supplied by caller at
>> > + * pool creation, or ODP_SHM_NULL if the
>> > + * pool is managed internally.
>> > + *
>> > + * @param[out] info Receives an odp_buffer_pool_info_t object
>> > + * that describes the pool.
>> > + *
>> > + * @return 0 on success, -1 if info could not be retrieved.
>>
>> Fix
>
>
> Same doc comments as above.
>
>>
>>
>> > + */
>> > +
>> > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm,
>> > + odp_buffer_pool_info_t *info);
>>
>> This doesn't belong in this patch, belongs in the
>> odp_buffer_pool_info patch.
>>
>> Again, the separate patch implements these functions. These are benign.
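To make the @retval request above concrete, here is a sketch of the form
being asked for, applied to the destroy routine; the individual failure
reasons are plausible guesses, not taken from the implementation:

	/**
	 * Destroy a buffer pool previously created by odp_buffer_pool_create()
	 *
	 * @param[in] pool    Handle of the buffer pool to be destroyed
	 *
	 * @retval 0 on success
	 * @retval -1 if pool is not a valid buffer pool handle
	 * @retval -1 if the pool still contains allocated buffers
	 */
	int odp_buffer_pool_destroy(odp_buffer_pool_t pool);

Note also that, as declared, both odp_buffer_pool_destroy() and
odp_buffer_pool_info() take the pool handle, so an anonymous pool can
only be inspected or destroyed by code that kept the handle returned
from odp_buffer_pool_create().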
>
>>
>> > /**
>> > * Print buffer pool info
>> > diff --git a/platform/linux-generic/include/api/odp_config.h
>> > b/platform/linux-generic/include/api/odp_config.h
>> > index 906897c..1226d37 100644
>> > --- a/platform/linux-generic/include/api/odp_config.h
>> > +++ b/platform/linux-generic/include/api/odp_config.h
>> > @@ -49,6 +49,16 @@ extern "C" {
>> > #define ODP_CONFIG_PKTIO_ENTRIES 64
>> >
>> > /**
>> > + * Segment size to use -
>>
>> What does "-" mean?
>> Can you elaborate more on this?
>
>
> It's a stray character.

Gah, I'm sorry for being unclear. I meant: remove the "-"! And can you
elaborate more, and not only say "Segment size to use"?

>
>>
>>
>> > + */
>> > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3)
>> > +
>> > +/**
>> > + * Maximum buffer size supported
>> > + */
>> > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7)
>>
>> Isn't this platform specific?
>
>
> Yes, and this is platform/linux-generic. I've chosen this for now because
> the current linux-generic
> packet I/O doesn't support scatter/gather reads/writes.

Bill, I know this is linux-generic; I was unclear again. Why do you place
this in odp_config.h and not in odp_platform_types.h?

>
>>
>>
>> > +
>> > +/**
>> > * @}
>> > */
>> >
>> > diff --git a/platform/linux-generic/include/api/odp_platform_types.h
>> > b/platform/linux-generic/include/api/odp_platform_types.h
>> > index 4db47d3..b9b3aea 100644
>> > --- a/platform/linux-generic/include/api/odp_platform_types.h
>> > +++ b/platform/linux-generic/include/api/odp_platform_types.h
>> > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t;
>> > #define ODP_PKTIO_ANY ((odp_pktio_t)~0)
>> >
>> > /**
>> > + * ODP shared memory block
>> > + */
>> > +typedef uint32_t odp_shm_t;
>> > +
>> > +/** Invalid shared memory block */
>> > +#define ODP_SHM_INVALID 0
>> > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use
>> > */
>>
>> ODP_SHM_* touches shm functionality and should be in its own patch to
>> fix/move it.
>
>
> Already discussed above.
>>
>>
>> > +
>> > +/**
>> > * @}
>> > */
>> >
>> > diff --git a/platform/linux-generic/include/api/odp_shared_memory.h
>> > b/platform/linux-generic/include/api/odp_shared_memory.h
>> > index 26e208b..f70db5a 100644
>> > --- a/platform/linux-generic/include/api/odp_shared_memory.h
>> > +++ b/platform/linux-generic/include/api/odp_shared_memory.h
>> > @@ -20,6 +20,7 @@ extern "C" {
>> >
>> >
>> > #include <odp_std_types.h>
>> > +#include <odp_platform_types.h>
>>
>> Not relevant for the odp_buffer_pool_create
>
>
> Incorrect. It is part of the restructure for reasons discussed above.

OK, for restructuring but not for odp_buffer_pool_create =)

>
>>
>>
>> >
>> > /** @defgroup odp_shared_memory ODP SHARED MEMORY
>> > * Operations on shared memory.
>> > @@ -38,15 +39,6 @@ extern "C" {
>> > #define ODP_SHM_PROC 0x2 /**< Share with external processes */
>> >
>> > /**
>> > - * ODP shared memory block
>> > - */
>> > -typedef uint32_t odp_shm_t;
>> > -
>> > -/** Invalid shared memory block */
>> > -#define ODP_SHM_INVALID 0
>> > -
>> > -
>> > -/**
>> > * Shared memory block info
>> > */
>> > typedef struct odp_shm_info_t {
>> > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h
>> > b/platform/linux-generic/include/odp_buffer_inlines.h
>> > new file mode 100644
>> > index 0000000..f33b41d
>> > --- /dev/null
>> > +++ b/platform/linux-generic/include/odp_buffer_inlines.h
>> > @@ -0,0 +1,157 @@
>> > +/* Copyright (c) 2014, Linaro Limited
>> > + * All rights reserved.
>> > + *
>> > + * SPDX-License-Identifier: BSD-3-Clause
>> > + */
>> > +
>> > +/**
>> > + * @file
>> > + *
>> > + * Inline functions for ODP buffer mgmt routines - implementation
>> > internal
>> > + */
>> > +
>> > +#ifndef ODP_BUFFER_INLINES_H_
>> > +#define ODP_BUFFER_INLINES_H_
>> > +
>> > +#ifdef __cplusplus
>> > +extern "C" {
>> > +#endif
>> > +
>> > +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t
>> > *hdr)
>> > +{
>> > + odp_buffer_bits_t handle;
>> > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl);
>> > + struct pool_entry_s *pool = get_pool_entry(pool_id);
>> > +
>> > + handle.pool_id = pool_id;
>> > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) /
>> > + ODP_CACHE_LINE_SIZE;
>> > + handle.seg = 0;
>> > +
>> > + return handle.u32;
>> > +}
>> > +
>> > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr)
>> > +{
>> > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr);
>> > + if (hdl != hdr->handle.handle) {
>> > + ODP_DBG("buf %p should have handle %x but is cached as
>> > %x\n",
>> > + hdr, hdl, hdr->handle.handle);
>> > + hdr->handle.handle = hdl;
>> > + }
>> > + return hdr->handle.handle;
>> > +}
>> > +
>> > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf)
>> > +{
>> > + odp_buffer_bits_t handle;
>> > + uint32_t pool_id;
>> > + uint32_t index;
>> > + struct pool_entry_s *pool;
>> > +
>> > + handle.u32 = buf;
>> > + pool_id = handle.pool_id;
>> > + index = handle.index;
>> > +
>> > +#ifdef POOL_ERROR_CHECK
>> > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) {
>> > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n");
>> > + return NULL;
>> > + }
>> > +#endif
>> > +
>> > + pool = get_pool_entry(pool_id);
>> > +
>> > +#ifdef POOL_ERROR_CHECK
>> > + if (odp_unlikely(index > pool->params.num_bufs - 1)) {
>> > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n");
>> > + return NULL;
>> > + }
>> > +#endif
>> > +
>> > + return (odp_buffer_hdr_t *)(void *)
>> > + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE));
>> > +}
>> > +
>> > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf)
>> > +{
>> > + return odp_atomic_load_u32(&buf->ref_count);
>> > +}
>> > +
>> > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf,
>> > + uint32_t val)
>> > +{
>> > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val;
>> > +}
>> > +
>> > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf,
>> > + uint32_t val)
>> > +{
>> > + uint32_t tmp;
>> > +
>> > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val);
>> > +
>> > + if (tmp < val) {
>> > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp);
>> > + return 0;
>> > + } else {
>>
>> drop the else statement
>
>
> That would be erroneous code. Refcounts don't go below 0. This code
> ensures that.

Bill, I was unclear again. I thought you understood: I meant only to
remove the "else" and unindent the return, like this:

if (tmp < val) {
	odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp);
	return 0;
}
return tmp - val;

>
>
> validate_buffer() can be > given any 32-bit value and it will robustly say whether or not it is a valid > buffer handle. hmm... OK, I will look again > >> >> >> > + if (handle.seg != 0) >> > + return NULL; >> >> Why do we need to check everything? >> shouldn't we trust our internal stuff to be sent correctly? >> Maybe it should be an ODP_ASSERT? > > > No, odp_buffer_is_valid() does not assert. It returns a yes/no value for > any > input value. > >> >> >> > + >> > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); >> > + >> > + /* If pool not created, handle is invalid */ >> > + if (pool->s.pool_shm == ODP_SHM_INVALID) >> > + return NULL; >> >> The same applies here. > > > Same answer. > >> >> >> > + >> > + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; >> > + >> > + /* A valid buffer index must be on stride, and must be in range */ >> > + if ((handle.index % buf_stride != 0) || >> > + ((uint32_t)(handle.index / buf_stride) >= >> > pool->s.params.num_bufs)) >> > + return NULL; >> > + >> > + buf_hdr = (odp_buffer_hdr_t *)(void *) >> > + (pool->s.pool_base_addr + >> > + (handle.index * ODP_CACHE_LINE_SIZE)); >> > + >> > + /* Handle is valid, so buffer is valid if it is allocated */ >> > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) >> > + return NULL; >> > + else >> >> Drop the else > > > No, that would be erroneous. A buffer handle is no longer valid if > the buffer has been freed. That's what's being checked here. again: /* Handle is valid, so buffer is valid if it is allocated */ if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) return NULL; return buf_hdr; > >> >> >> > + return buf_hdr; >> > +} >> > + >> > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); >> > + >> > +static inline void *buffer_map(odp_buffer_hdr_t *buf, >> > + size_t offset, >> > + size_t *seglen, >> > + size_t limit) >> > +{ >> > + int seg_index = offset / buf->segsize; >> > + int seg_offset = offset % buf->segsize; >> > + size_t buf_left = limit - offset; >> > + >> > + *seglen = buf_left < buf->segsize ? >> > + buf_left : buf->segsize - seg_offset; >> > + >> > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); >> > +} >> > + >> > +#ifdef __cplusplus >> > +} >> > +#endif >> > + >> > +#endif >> > diff --git a/platform/linux-generic/include/odp_buffer_internal.h >> > b/platform/linux-generic/include/odp_buffer_internal.h >> > index 0027bfc..29666db 100644 >> > --- a/platform/linux-generic/include/odp_buffer_internal.h >> > +++ b/platform/linux-generic/include/odp_buffer_internal.h >> > @@ -24,99 +24,118 @@ extern "C" { >> > #include <odp_buffer.h> >> > #include <odp_debug.h> >> > #include <odp_align.h> >> > - >> > -/* TODO: move these to correct files */ >> > - >> > -typedef uint64_t odp_phys_addr_t; >> > - >> > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) >> > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) >> > - >> > -#define ODP_BUFS_PER_CHUNK 16 >> > -#define ODP_BUFS_PER_SCATTER 4 >> > - >> > -#define ODP_BUFFER_TYPE_CHUNK 0xffff >> > - >> > +#include <odp_config.h> >> > +#include <odp_byteorder.h> >> > +#include <odp_thread.h> >> > + >> > + >> > +#define ODP_BUFFER_MAX_SEG >> > (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) >> > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG - >> > 1)) >> > + >> > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0, >> > + "ODP Segment size must be a multiple of cache line >> > size"); >> > + >> > +#define ODP_SEGBITS(x) \ >> > + ((x) < 2 ? 
1 : \ >> > + ((x) < 4 ? 2 : \ >> > + ((x) < 8 ? 3 : \ >> > + ((x) < 16 ? 4 : \ >> > + ((x) < 32 ? 5 : \ >> > + ((x) < 64 ? 6 : \ >> >> Do you need to add the tab "6 :<tab>\" > > > I'm not sure I understand the comment. fix your editor please! > >> >> >> > + ((x) < 128 ? 7 : \ >> > + ((x) < 256 ? 8 : \ >> > + ((x) < 512 ? 9 : \ >> > + ((x) < 1024 ? 10 : \ >> > + ((x) < 2048 ? 11 : \ >> > + ((x) < 4096 ? 12 : \ >> > + (0/0))))))))))))) >> > + >> > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < >> > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), >> > + "Number of segments must not exceed log of cache line >> > size"); >> > >> > #define ODP_BUFFER_POOL_BITS 4 >> > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) >> > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) >> > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - >> > ODP_BUFFER_SEG_BITS) >> > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + >> > ODP_BUFFER_INDEX_BITS) >> > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) >> > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) >> > >> > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) >> > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) >> > + >> > typedef union odp_buffer_bits_t { >> > uint32_t u32; >> > odp_buffer_t handle; >> > >> > struct { >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN >> > uint32_t pool_id:ODP_BUFFER_POOL_BITS; >> > uint32_t index:ODP_BUFFER_INDEX_BITS; >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; >> > +#else >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; >> > + uint32_t index:ODP_BUFFER_INDEX_BITS; >> > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; >> > +#endif >> >> and this will work on 64bit platforms? > > > Yes. I'm developing on a 64-bit platform. OK > >> >> >> > }; >> > -} odp_buffer_bits_t; >> > >> > + struct { >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; >> > +#else >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; >> > +#endif >> > + }; >> > +} odp_buffer_bits_t; >> > >> > /* forward declaration */ >> > struct odp_buffer_hdr_t; >> > >> > - >> > -/* >> > - * Scatter/gather list of buffers >> > - */ >> > -typedef struct odp_buffer_scatter_t { >> > - /* buffer pointers */ >> > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; >> > - int num_bufs; /* num buffers */ >> > - int pos; /* position on the list */ >> > - size_t total_len; /* Total length */ >> > -} odp_buffer_scatter_t; >> > - >> > - >> > -/* >> > - * Chunk of buffers (in single pool) >> > - */ >> > -typedef struct odp_buffer_chunk_t { >> > - uint32_t num_bufs; /* num buffers */ >> > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ >> > -} odp_buffer_chunk_t; >> > - >> > - >> > /* Common buffer header */ >> > typedef struct odp_buffer_hdr_t { >> > struct odp_buffer_hdr_t *next; /* next buf in a list */ >> > + int allocator; /* allocating thread id */ >> > odp_buffer_bits_t handle; /* handle */ >> > - odp_phys_addr_t phys_addr; /* physical data start >> > address */ >> > - void *addr; /* virtual data start address >> > */ >> > - uint32_t index; /* buf index in the pool */ >> > + union { >> > + uint32_t all; >> > + struct { >> > + uint32_t zeroized:1; /* Zeroize buf data on free >> > */ >> > + uint32_t hdrdata:1; /* Data is in buffer hdr */ >> > + }; >> > + } flags; >> > + int type; /* buffer type */ >> > size_t size; /* max data size */ >> > - size_t cur_offset; /* current offset */ >> > odp_atomic_u32_t 
ref_count; /* reference count */ >> > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ >> > - int type; /* type of next header */ >> > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ >> > - >> > + union { >> > + void *buf_ctx; /* user context */ >> > + void *udata_addr; /* user metadata addr */ >> > + }; >> > + size_t udata_size; /* size of user metadata */ >> > + uint32_t segcount; /* segment count */ >> > + uint32_t segsize; /* segment size */ >> > + void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs >> > */ >> > } odp_buffer_hdr_t; >> > >> > -/* Ensure next header starts from 8 byte align */ >> > -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, >> > "ODP_BUFFER_HDR_T__SIZE_ERROR"); >> > +typedef struct odp_buffer_hdr_stride { >> > + uint8_t >> > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; >> > +} odp_buffer_hdr_stride; >> > >> > +typedef struct odp_buf_blk_t { >> > + struct odp_buf_blk_t *next; >> > + struct odp_buf_blk_t *prev; >> > +} odp_buf_blk_t; >> > >> > /* Raw buffer header */ >> > typedef struct { >> > odp_buffer_hdr_t buf_hdr; /* common buffer header */ >> > - uint8_t buf_data[]; /* start of buffer data area */ >> > } odp_raw_buffer_hdr_t; >> > >> > - >> > -/* Chunk header */ >> > -typedef struct odp_buffer_chunk_hdr_t { >> > - odp_buffer_hdr_t buf_hdr; >> > - odp_buffer_chunk_t chunk; >> > -} odp_buffer_chunk_hdr_t; >> > - >> > - >> > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); >> > - >> > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t >> > buf_src); >> > - >> > +/* Forward declarations */ >> > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); >> > >> > #ifdef __cplusplus >> > } >> > diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h >> > b/platform/linux-generic/include/odp_buffer_pool_internal.h >> > index e0210bd..cd58f91 100644 >> > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h >> > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h >> > @@ -25,6 +25,35 @@ extern "C" { >> > #include <odp_hints.h> >> > #include <odp_config.h> >> > #include <odp_debug.h> >> > +#include <odp_shared_memory.h> >> > +#include <odp_atomic.h> >> > +#include <odp_atomic_internal.h> >> > +#include <string.h> >> > + >> > +/** >> > + * Buffer initialization routine prototype >> > + * >> > + * @note Routines of this type MAY be passed as part of the >> > + * _odp_buffer_pool_init_t structure to be called whenever a >> > + * buffer is allocated to initialize the user metadata >> > + * associated with that buffer. >> > + */ >> > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); >> > + >> > +/** >> > + * Buffer pool initialization parameters >> > + * >> > + * @param[in] udata_size Size of the user metadata for each buffer >> > + * @param[in] buf_init Function pointer to be called to >> > initialize the >> > + * user metadata for each buffer in the pool. >> > + * @param[in] buf_init_arg Argument to be passed to buf_init(). 
>> > + *
>> > + */
>> > +typedef struct _odp_buffer_pool_init_t {
>> > + size_t udata_size; /**< Size of user metadata for each
>> > buffer */
>> > + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to
>> > use */
>> > + void *buf_init_arg; /**< Argument to be passed to
>> > buf_init() */
>> > +} _odp_buffer_pool_init_t; /**< Type of buffer initialization
>> > struct */
>> >
>> > /* Use ticketlock instead of spinlock */
>> > #define POOL_USE_TICKETLOCK
>> > @@ -39,6 +68,17 @@ extern "C" {
>> > #include <odp_spinlock.h>
>> > #endif
>> >
>> > +#ifdef POOL_USE_TICKETLOCK
>> > +#include <odp_ticketlock.h>
>> > +#define LOCK(a) odp_ticketlock_lock(a)
>> > +#define UNLOCK(a) odp_ticketlock_unlock(a)
>> > +#define LOCK_INIT(a) odp_ticketlock_init(a)
>> > +#else
>> > +#include <odp_spinlock.h>
>> > +#define LOCK(a) odp_spinlock_lock(a)
>> > +#define UNLOCK(a) odp_spinlock_unlock(a)
>> > +#define LOCK_INIT(a) odp_spinlock_init(a)
>> > +#endif
>> >
>> > struct pool_entry_s {
>> > #ifdef POOL_USE_TICKETLOCK
>> > @@ -47,66 +87,224 @@ struct pool_entry_s {
>> > odp_spinlock_t lock ODP_ALIGNED_CACHE;
>> > #endif
>> >
>> > - odp_buffer_chunk_hdr_t *head;
>> > - uint64_t free_bufs;
>> > char name[ODP_BUFFER_POOL_NAME_LEN];
>> > -
>> > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE;
>> > - uintptr_t buf_base;
>> > - size_t buf_size;
>> > - size_t buf_offset;
>> > - uint64_t num_bufs;
>> > - void *pool_base_addr;
>> > - uint64_t pool_size;
>> > - size_t user_size;
>> > - size_t user_align;
>> > - int buf_type;
>> > - size_t hdr_size;
>> > + odp_buffer_pool_param_t params;
>> > + _odp_buffer_pool_init_t init_params;
>> > + odp_buffer_pool_t pool_hdl;
>> > + odp_shm_t pool_shm;
>> > + union {
>> > + uint32_t all;
>> > + struct {
>> > + uint32_t has_name:1;
>> > + uint32_t user_supplied_shm:1;
>> > + uint32_t unsegmented:1;
>> > + uint32_t zeroized:1;
>> > + uint32_t quiesced:1;
>> > + uint32_t low_wm_assert:1;
>> > + uint32_t predefined:1;
>> > + };
>> > + } flags;
>> > + uint8_t *pool_base_addr;
>> > + size_t pool_size;
>> > + uint32_t buf_stride;
>> > + _odp_atomic_ptr_t buf_freelist;
>> > + _odp_atomic_ptr_t blk_freelist;
>> > + odp_atomic_u32_t bufcount;
>> > + odp_atomic_u32_t blkcount;
>> > + odp_atomic_u64_t bufallocs;
>> > + odp_atomic_u64_t buffrees;
>> > + odp_atomic_u64_t blkallocs;
>> > + odp_atomic_u64_t blkfrees;
>> > + odp_atomic_u64_t bufempty;
>> > + odp_atomic_u64_t blkempty;
>> > + odp_atomic_u64_t high_wm_count;
>> > + odp_atomic_u64_t low_wm_count;
>> > + size_t seg_size;
>> > + size_t high_wm;
>> > + size_t low_wm;
>> > + size_t headroom;
>> > + size_t tailroom;
>>
>> General comment add the same level of information into the variable
>> names.
>>
>> Not consistent use "_" used to separate words in variable names.
>>
>
> These are internal structs. Not relevant.

So you mean that we shouldn't review internal code, and that it's OK to
be inconsistent because it's internal code?
> >> >> >> >> > }; >> > >> > +typedef union pool_entry_u { >> > + struct pool_entry_s s; >> > + >> > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct >> > pool_entry_s))]; >> > +} pool_entry_t; >> > >> > extern void *pool_entry_ptr[]; >> > >> > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1) >> > +#define buffer_is_secure(buf) (buf->flags.zeroized) >> > +#define pool_is_secure(pool) (pool->flags.zeroized) >> > +#else >> > +#define buffer_is_secure(buf) 0 >> > +#define pool_is_secure(pool) 0 >> > +#endif >> > + >> > +#define TAG_ALIGN ((size_t)16) >> > >> > -static inline void *get_pool_entry(uint32_t pool_id) >> > +#define odp_cs(ptr, old, new) \ >> > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, >> > \ >> > + _ODP_MEMMODEL_SC, \ >> > + _ODP_MEMMODEL_SC) >> > + >> > +/* Helper functions for pointer tagging to avoid ABA race conditions */ >> > +#define odp_tag(ptr) \ >> > + (((size_t)ptr) & (TAG_ALIGN - 1)) >> > + >> > +#define odp_detag(ptr) \ >> > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) >> > + >> > +#define odp_retag(ptr, tag) \ >> > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) >> > + >> > + >> > +static inline void *get_blk(struct pool_entry_s *pool) >> > { >> > - return pool_entry_ptr[pool_id]; >> > + void *oldhead, *myhead, *newhead; >> > + >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, >> > _ODP_MEMMODEL_ACQ); >> > + >> > + do { >> > + size_t tag = odp_tag(oldhead); >> > + myhead = odp_detag(oldhead); >> > + if (myhead == NULL) >> > + break; >> > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + >> > 1); >> > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); >> > + >> > + if (myhead == NULL) { >> > + odp_atomic_inc_u64(&pool->blkempty); >> > + } else { >> > + uint64_t blkcount = >> > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); >> > + >> > + /* Check for low watermark condition */ >> > + if (blkcount == pool->low_wm) { >> > + LOCK(&pool->lock); >> > + if (blkcount <= pool->low_wm && >> > + !pool->flags.low_wm_assert) { >> > + pool->flags.low_wm_assert = 1; >> > + odp_atomic_inc_u64(&pool->low_wm_count); >> > + } >> > + UNLOCK(&pool->lock); >> > + } >> > + odp_atomic_inc_u64(&pool->blkallocs); >> > + } >> > + >> > + return (void *)myhead; >> > } >> > >> > +static inline void ret_blk(struct pool_entry_s *pool, void *block) >> > +{ >> > + void *oldhead, *myhead, *myblock; >> > + >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, >> > _ODP_MEMMODEL_ACQ); >> > >> > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) >> > + do { >> > + size_t tag = odp_tag(oldhead); >> > + myhead = odp_detag(oldhead); >> > + ((odp_buf_blk_t *)block)->next = myhead; >> > + myblock = odp_retag(block, tag + 1); >> > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); >> > + >> > + odp_atomic_inc_u64(&pool->blkfrees); >> > + uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1); >> >> Move uint64_t up with next to all the other globaly declared variables >> for this function. > > > These are not global variables. Move the declaration to the top of this function next to the "void *oldhead,...." > >> >> >> >> Some comments to start with. =) >> >> Cheers, >> Anders > >
Segment-level headroom/tailroom is not part of this patch since Petri has
stated that we won't be doing that for v1.0. It can be added at a later
date if we decide we need it.

On Wed, Dec 3, 2014 at 2:25 AM, Bala Manoharan <bala.manoharan@linaro.org>
wrote:

> I have only a few minor comments for this patch.
> One major query: currently this patch does not integrate
> headroom/tailroom at the segment level. Is the feature available in 1.0?
> If it needs to be supported, then we need some modifications in segment
> handling.
>
> Regards,
> Bala

> On Tue, Dec 02, 2014 at 01:17:01PM -0600, Bill Fischofer wrote:
> > Restructure ODP buffer pool internals to support new APIs.
> > Implements new odp_buffer_pool_create() API.
> >
> > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
> > ---
> > example/generator/odp_generator.c | 19 +-
> > example/ipsec/odp_ipsec.c | 57 +-
> > example/l2fwd/odp_l2fwd.c | 19 +-
> > example/odp_example/odp_example.c | 18 +-
> > example/packet/odp_pktio.c | 19 +-
> > example/timer/odp_timer_test.c | 13 +-
> > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++-
> > platform/linux-generic/include/api/odp_config.h | 10 +
> > .../linux-generic/include/api/odp_platform_types.h | 9 +
> > .../linux-generic/include/api/odp_shared_memory.h | 10 +-
> > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++
> > .../linux-generic/include/odp_buffer_internal.h | 137 ++--
> > .../include/odp_buffer_pool_internal.h | 278 ++++++--
> > .../linux-generic/include/odp_packet_internal.h | 50 +-
> > .../linux-generic/include/odp_timer_internal.h | 11 +-
> > platform/linux-generic/odp_buffer.c | 31 +-
> > platform/linux-generic/odp_buffer_pool.c | 711
> +++++++++------------
> > platform/linux-generic/odp_packet.c | 41 +-
> > platform/linux-generic/odp_queue.c | 1 +
> > platform/linux-generic/odp_schedule.c | 20 +-
> > platform/linux-generic/odp_timer.c | 3 +-
> > test/api_test/odp_timer_ping.c | 19 +-
> > test/validation/odp_crypto.c | 43 +-
> > test/validation/odp_queue.c | 19 +-
> > 24 files changed, 1024 insertions(+), 762 deletions(-)
> > create mode 100644 platform/linux-generic/include/odp_buffer_inlines.h
> >
> > diff --git a/example/generator/odp_generator.c
> b/example/generator/odp_generator.c
> > index 73b0369..476cbef 100644
> > --- a/example/generator/odp_generator.c
> > +++ b/example/generator/odp_generator.c
> > @@ -522,11 +522,11 @@ int main(int argc, char *argv[])
> > odph_linux_pthread_t thread_tbl[MAX_WORKERS];
> > odp_buffer_pool_t pool;
> > int num_workers;
> > - void *pool_base;
> > int i;
> > int first_core;
> > int core_count;
> > odp_shm_t shm;
> > + odp_buffer_pool_param_t params;
> >
> > /* Init ODP before calling anything else */
> > if (odp_init_global(NULL, NULL)) {
> > @@ -589,20 +589,13 @@ int main(int argc, char *argv[])
> > printf("First core: %i\n\n", first_core);
> >
> > /* Create packet pool */
> > - shm = odp_shm_reserve("shm_packet_pool",
> > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
> > - pool_base = odp_shm_addr(shm);
> > + params.buf_size = SHM_PKT_POOL_BUF_SIZE;
> > + params.buf_align = 0;
> > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
> > + params.buf_type = ODP_BUFFER_TYPE_PACKET;
> >
> > - if (pool_base == NULL) {
> > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n");
> > - exit(EXIT_FAILURE);
> > - }
> > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL,
> &params);
> >
> > - pool = odp_buffer_pool_create("packet_pool", pool_base,
> > - SHM_PKT_POOL_SIZE,
> > - SHM_PKT_POOL_BUF_SIZE,
> - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c > > index 76d27c5..f96338c 100644 > > --- a/example/ipsec/odp_ipsec.c > > +++ b/example/ipsec/odp_ipsec.c > > @@ -367,8 +367,7 @@ static > > void ipsec_init_pre(void) > > { > > odp_queue_param_t qparam; > > - void *pool_base; > > - odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* > > * Create queues > > @@ -401,16 +400,12 @@ void ipsec_init_pre(void) > > } > > > > /* Create output buffer pool */ > > - shm = odp_shm_reserve("shm_out_pool", > > - SHM_OUT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_OUT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - out_pool = odp_buffer_pool_create("out_pool", pool_base, > > - SHM_OUT_POOL_SIZE, > > - SHM_OUT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > + out_pool = odp_buffer_pool_create("out_pool", ODP_SHM_NULL, > ¶ms); > > > > if (ODP_BUFFER_POOL_INVALID == out_pool) { > > EXAMPLE_ERR("Error: message pool create failed.\n"); > > @@ -1176,12 +1171,12 @@ main(int argc, char *argv[]) > > { > > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > > int num_workers; > > - void *pool_base; > > int i; > > int first_core; > > int core_count; > > int stream_count; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* Init ODP before calling anything else */ > > if (odp_init_global(NULL, NULL)) { > > @@ -1241,42 +1236,28 @@ main(int argc, char *argv[]) > > printf("First core: %i\n\n", first_core); > > > > /* Create packet buffer pool */ > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - pool_base = odp_shm_addr(shm); > > - > > - if (NULL == pool_base) { > > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + pkt_pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > > + ¶ms); > > > > - pkt_pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (ODP_BUFFER_POOL_INVALID == pkt_pool) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > } > > > > /* Create context buffer pool */ > > - shm = odp_shm_reserve("shm_ctx_pool", > > - SHM_CTX_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_CTX_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_CTX_POOL_BUF_COUNT; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - if (NULL == pool_base) { > > - EXAMPLE_ERR("Error: context pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + ctx_pool = odp_buffer_pool_create("ctx_pool", ODP_SHM_NULL, > > + ¶ms); > > > > - ctx_pool = odp_buffer_pool_create("ctx_pool", pool_base, > > - SHM_CTX_POOL_SIZE, > > - SHM_CTX_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > if (ODP_BUFFER_POOL_INVALID == ctx_pool) { > > EXAMPLE_ERR("Error: context pool create failed.\n"); > > exit(EXIT_FAILURE); > > 
diff --git a/example/l2fwd/odp_l2fwd.c b/example/l2fwd/odp_l2fwd.c > > index ebac8c5..3c1fd6a 100644 > > --- a/example/l2fwd/odp_l2fwd.c > > +++ b/example/l2fwd/odp_l2fwd.c > > @@ -314,12 +314,12 @@ int main(int argc, char *argv[]) > > { > > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > > odp_buffer_pool_t pool; > > - void *pool_base; > > int i; > > int first_core; > > int core_count; > > odp_pktio_t pktio; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* Init ODP before calling anything else */ > > if (odp_init_global(NULL, NULL)) { > > @@ -383,20 +383,13 @@ int main(int argc, char *argv[]) > > printf("First core: %i\n\n", first_core); > > > > /* Create packet pool */ > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - if (pool_base == NULL) { > > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + pool = odp_buffer_pool_create("packet pool", ODP_SHM_NULL, > ¶ms); > > > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > diff --git a/example/odp_example/odp_example.c > b/example/odp_example/odp_example.c > > index 96a2912..8373f12 100644 > > --- a/example/odp_example/odp_example.c > > +++ b/example/odp_example/odp_example.c > > @@ -954,13 +954,13 @@ int main(int argc, char *argv[]) > > test_args_t args; > > int num_workers; > > odp_buffer_pool_t pool; > > - void *pool_base; > > odp_queue_t queue; > > int i, j; > > int prios; > > int first_core; > > odp_shm_t shm; > > test_globals_t *globals; > > + odp_buffer_pool_param_t params; > > > > printf("\nODP example starts\n\n"); > > > > @@ -1042,19 +1042,13 @@ int main(int argc, char *argv[]) > > /* > > * Create message pool > > */ > > - shm = odp_shm_reserve("msg_pool", > > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = sizeof(test_message_t); > > + params.buf_align = 0; > > + params.num_bufs = MSG_POOL_SIZE/sizeof(test_message_t); > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - if (pool_base == NULL) { > > - EXAMPLE_ERR("Shared memory reserve failed.\n"); > > - return -1; > > - } > > - > > - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, > > - sizeof(test_message_t), > > - ODP_CACHE_LINE_SIZE, > ODP_BUFFER_TYPE_RAW); > > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); > > > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Pool create failed.\n"); > > diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c > > index 1763c84..27318d4 100644 > > --- a/example/packet/odp_pktio.c > > +++ b/example/packet/odp_pktio.c > > @@ -331,11 +331,11 @@ int main(int argc, char *argv[]) > > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > > odp_buffer_pool_t pool; > > int num_workers; > > - void *pool_base; > > int i; > > int first_core; > > int core_count; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* Init ODP before calling anything else */ > > if (odp_init_global(NULL, NULL)) { > > @@ -389,20 +389,13 @@ int main(int argc, char 
*argv[]) > > printf("First core: %i\n\n", first_core); > > > > /* Create packet pool */ > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - if (pool_base == NULL) { > > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > &params); > > > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > diff --git a/example/timer/odp_timer_test.c > b/example/timer/odp_timer_test.c > > index 9968bfe..0d6e31a 100644 > > --- a/example/timer/odp_timer_test.c > > +++ b/example/timer/odp_timer_test.c > > @@ -244,12 +244,12 @@ int main(int argc, char *argv[]) > > test_args_t args; > > int num_workers; > > odp_buffer_pool_t pool; > > - void *pool_base; > > odp_queue_t queue; > > int first_core; > > uint64_t cycles, ns; > > odp_queue_param_t param; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > printf("\nODP timer example starts\n"); > > > > @@ -313,12 +313,13 @@ int main(int argc, char *argv[]) > > */ > > shm = odp_shm_reserve("msg_pool", > > MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > > > - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, > > - 0, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_TIMEOUT); > > + params.buf_size = 0; > > + params.buf_align = 0; > > + params.num_bufs = MSG_POOL_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_TIMEOUT; > > + > > + pool = odp_buffer_pool_create("msg_pool", shm, &params); > > > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Pool create failed.\n"); > > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h > b/platform/linux-generic/include/api/odp_buffer_pool.h > > index 30b83e0..7022daa 100644 > > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > > @@ -36,32 +36,101 @@ extern "C" { > > #define ODP_BUFFER_POOL_INVALID 0 > > > > /** > > + * Buffer pool parameters > > + * Used to communicate buffer pool creation options. > > + */ > > +typedef struct odp_buffer_pool_param_t { > > + size_t buf_size; /**< Buffer size in bytes. The maximum > > + number of bytes application will > > + store in each buffer. */ > > + size_t buf_align; /**< Minimum buffer alignment in bytes. > > + Valid values are powers of two. Use 0 > > + for default alignment. Default will > > + always be a multiple of 8. */ > > + uint32_t num_bufs; /**< Number of buffers in the pool */ > > + int buf_type; /**< Buffer type */ > > +} odp_buffer_pool_param_t; > > + > > +/** > > * Create a buffer pool > > + * This routine is used to create a buffer pool. It takes three > > + * arguments: the optional name of the pool to be created, an optional > shared > > + * memory handle, and a parameter struct that describes the pool to be > > + * created. If a name is not specified the result is an anonymous pool > that > > + * cannot be referenced by odp_buffer_pool_lookup().
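To make the new calling convention concrete, a minimal usage sketch of the above (values illustrative only):

	odp_buffer_pool_param_t params = {
		.buf_size  = 1856,
		.buf_align = 0,		/* default alignment */
		.num_bufs  = 1024,
		.buf_type  = ODP_BUFFER_TYPE_PACKET,
	};

	/* Named pool in ODP-managed memory, visible to lookup */
	odp_buffer_pool_t pool =
		odp_buffer_pool_create("pkt_pool", ODP_SHM_NULL, &params);

	/* NULL name: anonymous pool, invisible to odp_buffer_pool_lookup() */
	odp_buffer_pool_t anon =
		odp_buffer_pool_create(NULL, ODP_SHM_NULL, &params);

Something like this in the Doxygen would spare each user rediscovering it.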
> > * > > - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 > chars) > > - * @param base_addr Pool base address > > - * @param size Pool size in bytes > > - * @param buf_size Buffer size in bytes > > - * @param buf_align Minimum buffer alignment > > - * @param buf_type Buffer type > > + * @param[in] name Name of the pool, max ODP_BUFFER_POOL_NAME_LEN-1 > chars. > > + * May be specified as NULL for anonymous pools. > > * > > - * @return Buffer pool handle > > + * @param[in] shm The shared memory object in which to create the > pool. > > + * Use ODP_SHM_NULL to reserve default memory type > > + * for the buffer type. > > + * > > + * @param[in] params Buffer pool parameters. > > + * > > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call failed. > > */ > > + > > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > - void *base_addr, uint64_t size, > > - size_t buf_size, size_t buf_align, > > - int buf_type); > > + odp_shm_t shm, > > + odp_buffer_pool_param_t *params); > > > > +/** > > + * Destroy a buffer pool previously created by odp_buffer_pool_create() > > + * > > + * @param[in] pool Handle of the buffer pool to be destroyed > > + * > > + * @return 0 on Success, -1 on Failure. > > + * > > + * @note This routine destroys a previously created buffer pool. This > call > > + * does not destroy any shared memory object passed to > > + * odp_buffer_pool_create() used to store the buffer pool contents. The > caller > > + * takes responsibility for that. If no shared memory object was passed > as > > + * part of the create call, then this routine will destroy any internal > shared > > + * memory objects associated with the buffer pool. Results are > undefined if > > + * an attempt is made to destroy a buffer pool that contains allocated > or > > + * otherwise active buffers. > > + */ > > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); > > > > /** > > * Find a buffer pool by name > > * > > - * @param name Name of the pool > > + * @param[in] name Name of the pool > > * > > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found. > > + * > > + * @note This routine cannot be used to look up an anonymous pool (one > created > > + * with no name). > > */ > > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > > > > +/** > > + * Buffer pool information struct > > + * Used to get information about a buffer pool. > > + */ > > +typedef struct odp_buffer_pool_info_t { > > + const char *name; /**< pool name */ > > + odp_buffer_pool_param_t params; /**< pool parameters */ > > +} odp_buffer_pool_info_t; > > + > > +/** > > + * Retrieve information about a buffer pool > > + * > > + * @param[in] pool Buffer pool handle > > + * > > + * @param[out] shm Receives odp_shm_t supplied by caller at > > + * pool creation, or ODP_SHM_NULL if the > > + * pool is managed internally. > > + * > > + * @param[out] info Receives an odp_buffer_pool_info_t object > > + * that describes the pool. > > + * > > + * @return 0 on success, -1 if info could not be retrieved.
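Same here; presumably the intended query pattern is something like this sketch (not from the patch):

	odp_shm_t shm;
	odp_buffer_pool_info_t info;

	if (odp_buffer_pool_info(pool, &shm, &info) == 0) {
		/* shm == ODP_SHM_NULL: storage is managed internally */
		printf("pool %s: %u bufs of up to %zu bytes\n",
		       info.name, info.params.num_bufs,
		       info.params.buf_size);
	}

Also worth documenting what info.name points at for an anonymous pool.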
> > + */ > > + > > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > > + odp_buffer_pool_info_t *info); > > > > /** > > * Print buffer pool info > > diff --git a/platform/linux-generic/include/api/odp_config.h > b/platform/linux-generic/include/api/odp_config.h > > index 906897c..1226d37 100644 > > --- a/platform/linux-generic/include/api/odp_config.h > > +++ b/platform/linux-generic/include/api/odp_config.h > > @@ -49,6 +49,16 @@ extern "C" { > > #define ODP_CONFIG_PKTIO_ENTRIES 64 > > > > /** > > + * Segment size to use - > > + */ > > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > > + > > +/** > > + * Maximum buffer size supported > > + */ > > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) > > + > > +/** > > * @} > > */ > > > > diff --git a/platform/linux-generic/include/api/odp_platform_types.h > b/platform/linux-generic/include/api/odp_platform_types.h > > index 4db47d3..b9b3aea 100644 > > --- a/platform/linux-generic/include/api/odp_platform_types.h > > +++ b/platform/linux-generic/include/api/odp_platform_types.h > > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > > > > /** > > + * ODP shared memory block > > + */ > > +typedef uint32_t odp_shm_t; > > + > > +/** Invalid shared memory block */ > > +#define ODP_SHM_INVALID 0 > > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use */ > > + > > +/** > > * @} > > */ > > > > diff --git a/platform/linux-generic/include/api/odp_shared_memory.h > b/platform/linux-generic/include/api/odp_shared_memory.h > > index 26e208b..f70db5a 100644 > > --- a/platform/linux-generic/include/api/odp_shared_memory.h > > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > > @@ -20,6 +20,7 @@ extern "C" { > > > > > > #include <odp_std_types.h> > > +#include <odp_platform_types.h> > > > > /** @defgroup odp_shared_memory ODP SHARED MEMORY > > * Operations on shared memory. > > @@ -38,15 +39,6 @@ extern "C" { > > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > > > > /** > > - * ODP shared memory block > > - */ > > -typedef uint32_t odp_shm_t; > > - > > -/** Invalid shared memory block */ > > -#define ODP_SHM_INVALID 0 > > - > > - > > -/** > > * Shared memory block info > > */ > > typedef struct odp_shm_info_t { > > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h > b/platform/linux-generic/include/odp_buffer_inlines.h > > new file mode 100644 > > index 0000000..f33b41d > > --- /dev/null > > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > > @@ -0,0 +1,157 @@ > > +/* Copyright (c) 2014, Linaro Limited > > + * All rights reserved. 
> > + * > > + * SPDX-License-Identifier: BSD-3-Clause > > + */ > > + > > +/** > > + * @file > > + * > > + * Inline functions for ODP buffer mgmt routines - implementation > internal > > + */ > > + > > +#ifndef ODP_BUFFER_INLINES_H_ > > +#define ODP_BUFFER_INLINES_H_ > > + > > +#ifdef __cplusplus > > +extern "C" { > > +#endif > > + > > +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t > *hdr) > > +{ > > + odp_buffer_bits_t handle; > > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > > + struct pool_entry_s *pool = get_pool_entry(pool_id); > > + > > + handle.pool_id = pool_id; > > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > > + ODP_CACHE_LINE_SIZE; > > + handle.seg = 0; > > + > > + return handle.u32; > > +} > > + > > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > > +{ > > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > > + if (hdl != hdr->handle.handle) { > > + ODP_DBG("buf %p should have handle %x but is cached as > %x\n", > > + hdr, hdl, hdr->handle.handle); > > + hdr->handle.handle = hdl; > > + } > > + return hdr->handle.handle; > > +} > > + > > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > +{ > > + odp_buffer_bits_t handle; > > + uint32_t pool_id; > > + uint32_t index; > > + struct pool_entry_s *pool; > > + > > + handle.u32 = buf; > > + pool_id = handle.pool_id; > > + index = handle.index; > > + > > +#ifdef POOL_ERROR_CHECK > > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > > + return NULL; > > + } > > +#endif > > + > > + pool = get_pool_entry(pool_id); > > + > > +#ifdef POOL_ERROR_CHECK > > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > > + return NULL; > > + } > > +#endif > > + > > + return (odp_buffer_hdr_t *)(void *) > > + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE)); > > +} > > + > > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > > +{ > > + return odp_atomic_load_u32(&buf->ref_count); > > +} > > + > > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf, > > + uint32_t val) > > +{ > > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > > +} > > + > > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf, > > + uint32_t val) > > +{ > > + uint32_t tmp; > > + > > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > > + > > + if (tmp < val) { > > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > > + return 0; > > + } else { > > + return tmp - val; > > + } > > +} > > + > > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > > +{ > > + odp_buffer_bits_t handle; > > + odp_buffer_hdr_t *buf_hdr; > > + handle.u32 = buf; > > + > > + /* For buffer handles, segment index must be 0 */ > > + if (handle.seg != 0) > > + return NULL; > > + > > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > > + > > + /* If pool not created, handle is invalid */ > > + if (pool->s.pool_shm == ODP_SHM_INVALID) > > + return NULL; > > + > > + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; > > + > > + /* A valid buffer index must be on stride, and must be in range */ > > + if ((handle.index % buf_stride != 0) || > > + ((uint32_t)(handle.index / buf_stride) >= > pool->s.params.num_bufs)) > > + return NULL; > > + > > + buf_hdr = (odp_buffer_hdr_t *)(void *) > > + (pool->s.pool_base_addr + > > + (handle.index * ODP_CACHE_LINE_SIZE)); > > + > > + /* Handle is valid, so 
buffer is valid if it is allocated */ > > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > > + return NULL; > > + else > > + return buf_hdr; > > +} > > + > > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > + > > +static inline void *buffer_map(odp_buffer_hdr_t *buf, > > + size_t offset, > > + size_t *seglen, > > + size_t limit) > > +{ > > + int seg_index = offset / buf->segsize; > We are currently discussing the use of headroom/tailroom per segment; > if that is adopted, we cannot compute seg_index directly from the > formula above. > > + int seg_offset = offset % buf->segsize; > > + size_t buf_left = limit - offset; > Maybe we need an error check here that buf->total_size > offset. > > + > > + *seglen = buf_left < buf->segsize ? > > + buf_left : buf->segsize - seg_offset; > > + > > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); > > +} > > + > > +#ifdef __cplusplus > > +} > > +#endif > > + > > +#endif > > diff --git a/platform/linux-generic/include/odp_buffer_internal.h > b/platform/linux-generic/include/odp_buffer_internal.h > > index 0027bfc..29666db 100644 > > --- a/platform/linux-generic/include/odp_buffer_internal.h > > +++ b/platform/linux-generic/include/odp_buffer_internal.h > > @@ -24,99 +24,118 @@ extern "C" { > > #include <odp_buffer.h> > > #include <odp_debug.h> > > #include <odp_align.h> > > - > > -/* TODO: move these to correct files */ > > - > > -typedef uint64_t odp_phys_addr_t; > > - > > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > - > > -#define ODP_BUFS_PER_CHUNK 16 > > -#define ODP_BUFS_PER_SCATTER 4 > > - > > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > > - > > +#include <odp_config.h> > > +#include <odp_byteorder.h> > > +#include <odp_thread.h> > > + > > + > > +#define ODP_BUFFER_MAX_SEG > (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG - > 1)) > > + > > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0, > > + "ODP Segment size must be a multiple of cache line > size"); > > + > > +#define ODP_SEGBITS(x) \ > > + ((x) < 2 ? 1 : \ > > + ((x) < 4 ? 2 : \ > > + ((x) < 8 ? 3 : \ > > + ((x) < 16 ? 4 : \ > > + ((x) < 32 ? 5 : \ > > + ((x) < 64 ? 6 : \ > > + ((x) < 128 ? 7 : \ > > + ((x) < 256 ? 8 : \ > > + ((x) < 512 ? 9 : \ > > + ((x) < 1024 ? 10 : \ > > + ((x) < 2048 ? 11 : \ > > + ((x) < 4096 ?
12 : \ > > + (0/0))))))))))))) > > + > > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > > + "Number of segments must not exceed log of cache line > size"); > > > > #define ODP_BUFFER_POOL_BITS 4 > > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - > ODP_BUFFER_SEG_BITS) > > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + > ODP_BUFFER_INDEX_BITS) > > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > > > > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > + > > typedef union odp_buffer_bits_t { > > uint32_t u32; > > odp_buffer_t handle; > > > > struct { > > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > uint32_t index:ODP_BUFFER_INDEX_BITS; > > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > +#else > > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > + uint32_t index:ODP_BUFFER_INDEX_BITS; > > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > +#endif > > }; > > -} odp_buffer_bits_t; > > > > + struct { > > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > +#else > > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > +#endif > > + }; > > +} odp_buffer_bits_t; > > > > /* forward declaration */ > > struct odp_buffer_hdr_t; > > > > - > > -/* > > - * Scatter/gather list of buffers > > - */ > > -typedef struct odp_buffer_scatter_t { > > - /* buffer pointers */ > > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > > - int num_bufs; /* num buffers */ > > - int pos; /* position on the list */ > > - size_t total_len; /* Total length */ > > -} odp_buffer_scatter_t; > > - > > - > > -/* > > - * Chunk of buffers (in single pool) > > - */ > > -typedef struct odp_buffer_chunk_t { > > - uint32_t num_bufs; /* num buffers */ > > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > > -} odp_buffer_chunk_t; > > - > > - > > /* Common buffer header */ > > typedef struct odp_buffer_hdr_t { > > struct odp_buffer_hdr_t *next; /* next buf in a list */ > > + int allocator; /* allocating thread id */ > > odp_buffer_bits_t handle; /* handle */ > > - odp_phys_addr_t phys_addr; /* physical data start > address */ > > - void *addr; /* virtual data start address > */ > > - uint32_t index; /* buf index in the pool */ > > + union { > > + uint32_t all; > > + struct { > > + uint32_t zeroized:1; /* Zeroize buf data on free */ > > + uint32_t hdrdata:1; /* Data is in buffer hdr */ > > + }; > > + } flags; > > + int type; /* buffer type */ > > size_t size; /* max data size */ > > - size_t cur_offset; /* current offset */ > > odp_atomic_u32_t ref_count; /* reference count */ > > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ > > - int type; /* type of next header */ > > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ > > - > > + union { > > + void *buf_ctx; /* user context */ > > + void *udata_addr; /* user metadata addr */ > > + }; > > + size_t udata_size; /* size of user metadata */ > > + uint32_t segcount; /* segment count */ > > + uint32_t segsize; /* segment size */ > > + void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs > */ > > } odp_buffer_hdr_t; > > > > -/* Ensure next header starts from 8 byte align */ > > 
-ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, > "ODP_BUFFER_HDR_T__SIZE_ERROR"); > > +typedef struct odp_buffer_hdr_stride { > > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > > +} odp_buffer_hdr_stride; > > > > +typedef struct odp_buf_blk_t { > > + struct odp_buf_blk_t *next; > > + struct odp_buf_blk_t *prev; > > +} odp_buf_blk_t; > > > > /* Raw buffer header */ > > typedef struct { > > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > > - uint8_t buf_data[]; /* start of buffer data area */ > > } odp_raw_buffer_hdr_t; > > > > - > > -/* Chunk header */ > > -typedef struct odp_buffer_chunk_hdr_t { > > - odp_buffer_hdr_t buf_hdr; > > - odp_buffer_chunk_t chunk; > > -} odp_buffer_chunk_hdr_t; > > - > > - > > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > - > > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t > buf_src); > > - > > +/* Forward declarations */ > > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > > > > #ifdef __cplusplus > > } > > diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h > b/platform/linux-generic/include/odp_buffer_pool_internal.h > > index e0210bd..cd58f91 100644 > > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h > > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > > @@ -25,6 +25,35 @@ extern "C" { > > #include <odp_hints.h> > > #include <odp_config.h> > > #include <odp_debug.h> > > +#include <odp_shared_memory.h> > > +#include <odp_atomic.h> > > +#include <odp_atomic_internal.h> > > +#include <string.h> > > + > > +/** > > + * Buffer initialization routine prototype > > + * > > + * @note Routines of this type MAY be passed as part of the > > + * _odp_buffer_pool_init_t structure to be called whenever a > > + * buffer is allocated to initialize the user metadata > > + * associated with that buffer. > > + */ > > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); > > + > > +/** > > + * Buffer pool initialization parameters > > + * > > + * @param[in] udata_size Size of the user metadata for each buffer > > + * @param[in] buf_init Function pointer to be called to > initialize the > > + * user metadata for each buffer in the pool. > > + * @param[in] buf_init_arg Argument to be passed to buf_init(). 
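Even though udata support is disabled for v1.0, it may help to record the intended use of this hook. A pool-internal caller would presumably register something like the following, where my_udata_t is hypothetical:

	static void my_buf_init(odp_buffer_t buf, void *buf_init_arg)
	{
		/* Called per buffer to seed its user metadata area */
		(void)buf;
		(void)buf_init_arg;
	}

	_odp_buffer_pool_init_t init_params = {
		.udata_size   = sizeof(my_udata_t),
		.buf_init     = my_buf_init,
		.buf_init_arg = NULL,
	};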
> > + * > > + */ > > +typedef struct _odp_buffer_pool_init_t { > > + size_t udata_size; /**< Size of user metadata for each > buffer */ > > + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to > use */ > > + void *buf_init_arg; /**< Argument to be passed to > buf_init() */ > > +} _odp_buffer_pool_init_t; /**< Type of buffer initialization > struct */ > > > > /* Use ticketlock instead of spinlock */ > > #define POOL_USE_TICKETLOCK > > @@ -39,6 +68,17 @@ extern "C" { > > #include <odp_spinlock.h> > > #endif > > > > +#ifdef POOL_USE_TICKETLOCK > > +#include <odp_ticketlock.h> > > +#define LOCK(a) odp_ticketlock_lock(a) > > +#define UNLOCK(a) odp_ticketlock_unlock(a) > > +#define LOCK_INIT(a) odp_ticketlock_init(a) > > +#else > > +#include <odp_spinlock.h> > > +#define LOCK(a) odp_spinlock_lock(a) > > +#define UNLOCK(a) odp_spinlock_unlock(a) > > +#define LOCK_INIT(a) odp_spinlock_init(a) > > +#endif > > > > struct pool_entry_s { > > #ifdef POOL_USE_TICKETLOCK > > @@ -47,66 +87,224 @@ struct pool_entry_s { > > odp_spinlock_t lock ODP_ALIGNED_CACHE; > > #endif > > > > - odp_buffer_chunk_hdr_t *head; > > - uint64_t free_bufs; > > char name[ODP_BUFFER_POOL_NAME_LEN]; > > - > > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > > - uintptr_t buf_base; > > - size_t buf_size; > > - size_t buf_offset; > > - uint64_t num_bufs; > > - void *pool_base_addr; > > - uint64_t pool_size; > > - size_t user_size; > > - size_t user_align; > > - int buf_type; > > - size_t hdr_size; > > + odp_buffer_pool_param_t params; > > + _odp_buffer_pool_init_t init_params; > > + odp_buffer_pool_t pool_hdl; > > + odp_shm_t pool_shm; > > + union { > > + uint32_t all; > > + struct { > > + uint32_t has_name:1; > > + uint32_t user_supplied_shm:1; > > + uint32_t unsegmented:1; > > + uint32_t zeroized:1; > > + uint32_t quiesced:1; > > + uint32_t low_wm_assert:1; > > + uint32_t predefined:1; > > + }; > > + } flags; > > + uint8_t *pool_base_addr; > > + size_t pool_size; > > + uint32_t buf_stride; > > + _odp_atomic_ptr_t buf_freelist; > Minor: Consider renaming it as seg_freelist as pool is a collection of > segments > and Buffer is a logical term. 
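For readers of get_blk()/ret_blk() below: stripped of the watermark bookkeeping, the freelist discipline is a classic tagged-pointer pop. Roughly, using the same names as the patch:

	/* TAG_ALIGN (16) keeps the low 4 bits of each node address free;
	 * a counter stored there is bumped on every update, so a
	 * pop/push/pop of the same node by another thread (the ABA case)
	 * changes the tag and makes the compare-and-swap fail safely. */
	oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ);
	do {
		size_t tag = odp_tag(oldhead);	/* counter bits */
		myhead = odp_detag(oldhead);	/* real pointer */
		if (myhead == NULL)
			break;			/* freelist empty */
		newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + 1);
	} while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0);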
> > + _odp_atomic_ptr_t blk_freelist; > > + odp_atomic_u32_t bufcount; > > + odp_atomic_u32_t blkcount; > > + odp_atomic_u64_t bufallocs; > > + odp_atomic_u64_t buffrees; > > + odp_atomic_u64_t blkallocs; > > + odp_atomic_u64_t blkfrees; > > + odp_atomic_u64_t bufempty; > > + odp_atomic_u64_t blkempty; > > + odp_atomic_u64_t high_wm_count; > > + odp_atomic_u64_t low_wm_count; > > + size_t seg_size; > > + size_t high_wm; > > + size_t low_wm; > > + size_t headroom; > > + size_t tailroom; > > }; > > > > +typedef union pool_entry_u { > > + struct pool_entry_s s; > > + > > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > pool_entry_s))]; > > +} pool_entry_t; > > > > extern void *pool_entry_ptr[]; > > > > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1) > > +#define buffer_is_secure(buf) (buf->flags.zeroized) > > +#define pool_is_secure(pool) (pool->flags.zeroized) > > +#else > > +#define buffer_is_secure(buf) 0 > > +#define pool_is_secure(pool) 0 > > +#endif > > + > > +#define TAG_ALIGN ((size_t)16) > > > > -static inline void *get_pool_entry(uint32_t pool_id) > > +#define odp_cs(ptr, old, new) \ > > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, \ > > + _ODP_MEMMODEL_SC, \ > > + _ODP_MEMMODEL_SC) > > + > > +/* Helper functions for pointer tagging to avoid ABA race conditions */ > > +#define odp_tag(ptr) \ > > + (((size_t)ptr) & (TAG_ALIGN - 1)) > > + > > +#define odp_detag(ptr) \ > > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > > + > > +#define odp_retag(ptr, tag) \ > > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > > + > > + > > +static inline void *get_blk(struct pool_entry_s *pool) > > { > > - return pool_entry_ptr[pool_id]; > > + void *oldhead, *myhead, *newhead; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > _ODP_MEMMODEL_ACQ); > > + > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + if (myhead == NULL) > > + break; > > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + > 1); > > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > > + > > + if (myhead == NULL) { > > + odp_atomic_inc_u64(&pool->blkempty); > > + } else { > > + uint64_t blkcount = > > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); > > + > > + /* Check for low watermark condition */ > > + if (blkcount == pool->low_wm) { > > + LOCK(&pool->lock); > > + if (blkcount <= pool->low_wm && > > + !pool->flags.low_wm_assert) { > > + pool->flags.low_wm_assert = 1; > > + odp_atomic_inc_u64(&pool->low_wm_count); > > + } > > + UNLOCK(&pool->lock); > > + } > > + odp_atomic_inc_u64(&pool->blkallocs); > > + } > > + > > + return (void *)myhead; > > } > > > > +static inline void ret_blk(struct pool_entry_s *pool, void *block) > > +{ > > + void *oldhead, *myhead, *myblock; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > _ODP_MEMMODEL_ACQ); > > > > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + ((odp_buf_blk_t *)block)->next = myhead; > > + myblock = odp_retag(block, tag + 1); > > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > > + > > + odp_atomic_inc_u64(&pool->blkfrees); > > + uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1); > > + > > + /* Check if low watermark condition should be deasserted */ > > + if (blkcount == pool->high_wm) { > > + LOCK(&pool->lock); > > + if (blkcount == pool->high_wm && > pool->flags.low_wm_assert) { > > + 
pool->flags.low_wm_assert = 0; > > + odp_atomic_inc_u64(&pool->high_wm_count); > > + } > > + UNLOCK(&pool->lock); > > + } > > +} > > + > > +static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool) > > { > > - odp_buffer_bits_t handle; > > - uint32_t pool_id; > > - uint32_t index; > > - struct pool_entry_s *pool; > > - odp_buffer_hdr_t *hdr; > > - > > - handle.u32 = buf; > > - pool_id = handle.pool_id; > > - index = handle.index; > > - > > -#ifdef POOL_ERROR_CHECK > > - if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > > - ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > > - return NULL; > > + odp_buffer_hdr_t *oldhead, *myhead, *newhead; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, > _ODP_MEMMODEL_ACQ); > > + > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + if (myhead == NULL) > > + break; > > + newhead = odp_retag(myhead->next, tag + 1); > > + } while (odp_cs(pool->buf_freelist, oldhead, newhead) == 0); > > + > > + if (myhead != NULL) { > > + myhead->next = myhead; > > + myhead->allocator = odp_thread_id(); > > + odp_atomic_inc_u32(&pool->bufcount); > > + odp_atomic_inc_u64(&pool->bufallocs); > > + } else { > > + odp_atomic_inc_u64(&pool->bufempty); > > } > > -#endif > > > > - pool = get_pool_entry(pool_id); > > + return (void *)myhead; > > +} > > + > > +static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t > *buf) > > +{ > > + odp_buffer_hdr_t *oldhead, *myhead, *mybuf; > > > > -#ifdef POOL_ERROR_CHECK > > - if (odp_unlikely(index > pool->num_bufs - 1)) { > > - ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > > - return NULL; > > + if (!buf->flags.hdrdata && buf->type != ODP_BUFFER_TYPE_RAW) { > > + while (buf->segcount > 0) { > > + if (buffer_is_secure(buf) || pool_is_secure(pool)) > > + memset(buf->addr[buf->segcount - 1], > > + 0, buf->segsize); > > + ret_blk(pool, buf->addr[--buf->segcount]); > > + } > > + buf->size = 0; > > } > > -#endif > > > > - hdr = (odp_buffer_hdr_t *)(pool->buf_base + index * > pool->buf_size); > > + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, > _ODP_MEMMODEL_ACQ); > > > > - return hdr; > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + buf->next = myhead; > > + mybuf = odp_retag(buf, tag + 1); > > + } while (odp_cs(pool->buf_freelist, oldhead, mybuf) == 0); > > + > > + odp_atomic_dec_u32(&pool->bufcount); > > + odp_atomic_inc_u64(&pool->buffrees); > > +} > > + > > +static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) > > +{ > > + return pool_id + 1; > > } > > > > +static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) > > +{ > > + return pool_hdl - 1; > > +} > > + > > +static inline void *get_pool_entry(uint32_t pool_id) > > +{ > > + return pool_entry_ptr[pool_id]; > > +} > > + > > +static inline pool_entry_t *odp_pool_to_entry(odp_buffer_pool_t pool) > > +{ > > + return (pool_entry_t *)get_pool_entry(pool_handle_to_index(pool)); > > +} > > + > > +static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf) > > +{ > > + return odp_pool_to_entry(buf->pool_hdl); > > +} > > + > > +static inline size_t odp_buffer_pool_segment_size(odp_buffer_pool_t > pool) > > +{ > > + return odp_pool_to_entry(pool)->s.seg_size; > > +} > > > > #ifdef __cplusplus > > } > > diff --git a/platform/linux-generic/include/odp_packet_internal.h > b/platform/linux-generic/include/odp_packet_internal.h > > index 49c59b2..f34a83d 100644 > > --- a/platform/linux-generic/include/odp_packet_internal.h > > +++ 
b/platform/linux-generic/include/odp_packet_internal.h > > @@ -22,6 +22,7 @@ extern "C" { > > #include <odp_debug.h> > > #include <odp_buffer_internal.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > #include <odp_packet.h> > > #include <odp_packet_io.h> > > > > @@ -92,7 +93,8 @@ typedef union { > > }; > > } output_flags_t; > > > > -ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), > "OUTPUT_FLAGS_SIZE_ERROR"); > > +ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), > > + "OUTPUT_FLAGS_SIZE_ERROR"); > > > > /** > > * Internal Packet header > > @@ -105,25 +107,23 @@ typedef struct { > > error_flags_t error_flags; > > output_flags_t output_flags; > > > > - uint32_t frame_offset; /**< offset to start of frame, even on > error */ > > uint32_t l2_offset; /**< offset to L2 hdr, e.g. Eth */ > > uint32_t l3_offset; /**< offset to L3 hdr, e.g. IPv4, IPv6 */ > > uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, also > ICMP) */ > > > > uint32_t frame_len; > > + uint32_t headroom; > > + uint32_t tailroom; > > > > uint64_t user_ctx; /* user context */ > > > > odp_pktio_t input; > > - > > - uint32_t pad; > > - uint8_t buf_data[]; /* start of buffer data area */ > > } odp_packet_hdr_t; > > > > -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) == > ODP_OFFSETOF(odp_packet_hdr_t, buf_data), > > - "ODP_PACKET_HDR_T__SIZE_ERR"); > > -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) % sizeof(uint64_t) == 0, > > - "ODP_PACKET_HDR_T__SIZE_ERR2"); > > +typedef struct odp_packet_hdr_stride { > > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t))]; > > +} odp_packet_hdr_stride; > > + > > > > /** > > * Return the packet header > > @@ -138,6 +138,38 @@ static inline odp_packet_hdr_t > *odp_packet_hdr(odp_packet_t pkt) > > */ > > void odp_packet_parse(odp_packet_t pkt, size_t len, size_t l2_offset); > > > > +/** > > + * Initialize packet buffer > > + */ > > +static inline void packet_init(pool_entry_t *pool, > > + odp_packet_hdr_t *pkt_hdr, > > + size_t size) > > +{ > > + /* > > + * Reset parser metadata. Note that we clear via memset to make > > + * this routine independent of any additional adds to packet > metadata. > > + */ > > + const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, > buf_hdr); > > + uint8_t *start; > > + size_t len; > > + > > + start = (uint8_t *)pkt_hdr + start_offset; > > + len = sizeof(odp_packet_hdr_t) - start_offset; > > + memset(start, 0, len); > > + > > + /* > > + * Packet headroom is set from the pool's headroom > > + * Packet tailroom is rounded up to fill the last > > + * segment occupied by the allocated length.
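A worked example of that tailroom comment (numbers illustrative, using the default ODP_CONFIG_BUF_SEG_SIZE of 512*3 = 1536): with headroom = 0 and an allocated size of 2000 bytes the buffer spans segcount = 2 segments, so

	tailroom = (1536 * 2) - (0 + 2000) = 1072

i.e. everything between the end of the frame data and the end of the last segment.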
> > + */ > > + pkt_hdr->frame_len = size; > > + pkt_hdr->headroom = pool->s.headroom; > > + pkt_hdr->tailroom = > > + (pool->s.seg_size * pkt_hdr->buf_hdr.segcount) - > > + (pool->s.headroom + size); > > +} > > + > > + > > #ifdef __cplusplus > > } > > #endif > > diff --git a/platform/linux-generic/include/odp_timer_internal.h > b/platform/linux-generic/include/odp_timer_internal.h > > index ad28f53..2ff36ce 100644 > > --- a/platform/linux-generic/include/odp_timer_internal.h > > +++ b/platform/linux-generic/include/odp_timer_internal.h > > @@ -51,14 +51,9 @@ typedef struct odp_timeout_hdr_t { > > uint8_t buf_data[]; > > } odp_timeout_hdr_t; > > > > - > > - > > -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) == > > - ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), > > - "ODP_TIMEOUT_HDR_T__SIZE_ERR"); > > - > > -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0, > > - "ODP_TIMEOUT_HDR_T__SIZE_ERR2"); > > +typedef struct odp_timeout_hdr_stride { > > + uint8_t > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))]; > > +} odp_timeout_hdr_stride; > > > > > > /** > > diff --git a/platform/linux-generic/odp_buffer.c > b/platform/linux-generic/odp_buffer.c > > index bcbb99a..366190c 100644 > > --- a/platform/linux-generic/odp_buffer.c > > +++ b/platform/linux-generic/odp_buffer.c > > @@ -5,8 +5,9 @@ > > */ > > > > #include <odp_buffer.h> > > -#include <odp_buffer_internal.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_internal.h> > > +#include <odp_buffer_inlines.h> > > > > #include <string.h> > > #include <stdio.h> > > @@ -16,7 +17,7 @@ void *odp_buffer_addr(odp_buffer_t buf) > > { > > odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); > > > > - return hdr->addr; > > + return hdr->addr[0]; > > } > > > > > > @@ -38,11 +39,7 @@ int odp_buffer_type(odp_buffer_t buf) > > > > int odp_buffer_is_valid(odp_buffer_t buf) > > { > > - odp_buffer_bits_t handle; > > - > > - handle.u32 = buf; > > - > > - return (handle.index != ODP_BUFFER_INVALID_INDEX); > > + return validate_buf(buf) != NULL; > > } > > > > > > @@ -63,28 +60,14 @@ int odp_buffer_snprint(char *str, size_t n, > odp_buffer_t buf) > > len += snprintf(&str[len], n-len, > > " pool %i\n", hdr->pool_hdl); > > len += snprintf(&str[len], n-len, > > - " index %"PRIu32"\n", hdr->index); > > - len += snprintf(&str[len], n-len, > > - " phy_addr %"PRIu64"\n", hdr->phys_addr); > > - len += snprintf(&str[len], n-len, > > " addr %p\n", hdr->addr); > > len += snprintf(&str[len], n-len, > > " size %zu\n", hdr->size); > > len += snprintf(&str[len], n-len, > > - " cur_offset %zu\n", hdr->cur_offset); > > - len += snprintf(&str[len], n-len, > > " ref_count %i\n", > > odp_atomic_load_u32(&hdr->ref_count)); > > len += snprintf(&str[len], n-len, > > " type %i\n", hdr->type); > > - len += snprintf(&str[len], n-len, > > - " Scatter list\n"); > > - len += snprintf(&str[len], n-len, > > - " num_bufs %i\n", > hdr->scatter.num_bufs); > > - len += snprintf(&str[len], n-len, > > - " pos %i\n", hdr->scatter.pos); > > - len += snprintf(&str[len], n-len, > > - " total_len %zu\n", > hdr->scatter.total_len); > > > > return len; > > } > > @@ -101,9 +84,3 @@ void odp_buffer_print(odp_buffer_t buf) > > > > ODP_PRINT("\n%s\n", str); > > } > > - > > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src) > > -{ > > - (void)buf_dst; > > - (void)buf_src; > > -} > > diff --git a/platform/linux-generic/odp_buffer_pool.c > b/platform/linux-generic/odp_buffer_pool.c > > index 6a0a6b2..f545090 100644 > > --- 
a/platform/linux-generic/odp_buffer_pool.c > > +++ b/platform/linux-generic/odp_buffer_pool.c > > @@ -6,8 +6,9 @@ > > > > #include <odp_std_types.h> > > #include <odp_buffer_pool.h> > > -#include <odp_buffer_pool_internal.h> > > #include <odp_buffer_internal.h> > > +#include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > #include <odp_packet_internal.h> > > #include <odp_timer_internal.h> > > #include <odp_shared_memory.h> > > @@ -16,57 +17,35 @@ > > #include <odp_config.h> > > #include <odp_hints.h> > > #include <odp_debug.h> > > +#include <odp_atomic_internal.h> > > > > #include <string.h> > > #include <stdlib.h> > > > > > > -#ifdef POOL_USE_TICKETLOCK > > -#include <odp_ticketlock.h> > > -#define LOCK(a) odp_ticketlock_lock(a) > > -#define UNLOCK(a) odp_ticketlock_unlock(a) > > -#define LOCK_INIT(a) odp_ticketlock_init(a) > > -#else > > -#include <odp_spinlock.h> > > -#define LOCK(a) odp_spinlock_lock(a) > > -#define UNLOCK(a) odp_spinlock_unlock(a) > > -#define LOCK_INIT(a) odp_spinlock_init(a) > > -#endif > > - > > - > > #if ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS > > #error ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS > > #endif > > > > -#define NULL_INDEX ((uint32_t)-1) > > > > -union buffer_type_any_u { > > +typedef union buffer_type_any_u { > > odp_buffer_hdr_t buf; > > odp_packet_hdr_t pkt; > > odp_timeout_hdr_t tmo; > > -}; > > - > > -ODP_STATIC_ASSERT((sizeof(union buffer_type_any_u) % 8) == 0, > > - "BUFFER_TYPE_ANY_U__SIZE_ERR"); > > +} odp_anybuf_t; > > > > /* Any buffer type header */ > > typedef struct { > > union buffer_type_any_u any_hdr; /* any buffer type */ > > - uint8_t buf_data[]; /* start of buffer data area */ > > } odp_any_buffer_hdr_t; > > > > - > > -typedef union pool_entry_u { > > - struct pool_entry_s s; > > - > > - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > pool_entry_s))]; > > - > > -} pool_entry_t; > > +typedef struct odp_any_hdr_stride { > > + uint8_t > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_any_buffer_hdr_t))]; > > +} odp_any_hdr_stride; > > > > > > typedef struct pool_table_t { > > pool_entry_t pool[ODP_CONFIG_BUFFER_POOLS]; > > - > > } pool_table_t; > > > > > > @@ -77,38 +56,6 @@ static pool_table_t *pool_tbl; > > void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS]; > > > > > > -static __thread odp_buffer_chunk_hdr_t > *local_chunk[ODP_CONFIG_BUFFER_POOLS]; > > - > > - > > -static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) > > -{ > > - return pool_id + 1; > > -} > > - > > - > > -static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) > > -{ > > - return pool_hdl -1; > > -} > > - > > - > > -static inline void set_handle(odp_buffer_hdr_t *hdr, > > - pool_entry_t *pool, uint32_t index) > > -{ > > - odp_buffer_pool_t pool_hdl = pool->s.pool_hdl; > > - uint32_t pool_id = pool_handle_to_index(pool_hdl); > > - > > - if (pool_id >= ODP_CONFIG_BUFFER_POOLS) > > - ODP_ABORT("set_handle: Bad pool handle %u\n", pool_hdl); > > - > > - if (index > ODP_BUFFER_MAX_INDEX) > > - ODP_ERR("set_handle: Bad buffer index\n"); > > - > > - hdr->handle.pool_id = pool_id; > > - hdr->handle.index = index; > > -} > > - > > - > > int odp_buffer_pool_init_global(void) > > { > > uint32_t i; > > @@ -142,269 +89,244 @@ int odp_buffer_pool_init_global(void) > > return 0; > > } > > > > +/** > > + * Buffer pool creation > > + */ > > > > -static odp_buffer_hdr_t *index_to_hdr(pool_entry_t *pool, uint32_t > index) > > -{ > > - odp_buffer_hdr_t *hdr; > > - > > - hdr = (odp_buffer_hdr_t *)(pool->s.buf_base + index 
* > pool->s.buf_size); > > - return hdr; > > -} > > - > > - > > -static void add_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr, uint32_t > index) > > -{ > > - uint32_t i = chunk_hdr->chunk.num_bufs; > > - chunk_hdr->chunk.buf_index[i] = index; > > - chunk_hdr->chunk.num_bufs++; > > -} > > - > > - > > -static uint32_t rem_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr) > > +odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > + odp_shm_t shm, > > + odp_buffer_pool_param_t *params) > > { > > - uint32_t index; > > + odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; > > + pool_entry_t *pool; > > uint32_t i; > > > > - i = chunk_hdr->chunk.num_bufs - 1; > > - index = chunk_hdr->chunk.buf_index[i]; > > - chunk_hdr->chunk.num_bufs--; > > - return index; > > -} > > - > > - > > -static odp_buffer_chunk_hdr_t *next_chunk(pool_entry_t *pool, > > - odp_buffer_chunk_hdr_t > *chunk_hdr) > > -{ > > - uint32_t index; > > - > > - index = chunk_hdr->chunk.buf_index[ODP_BUFS_PER_CHUNK-1]; > > - if (index == NULL_INDEX) > > - return NULL; > > - else > > - return (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index); > > -} > > - > > - > > -static odp_buffer_chunk_hdr_t *rem_chunk(pool_entry_t *pool) > > -{ > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - > > - chunk_hdr = pool->s.head; > > - if (chunk_hdr == NULL) { > > - /* Pool is empty */ > > - return NULL; > > - } > > - > > - pool->s.head = next_chunk(pool, chunk_hdr); > > - pool->s.free_bufs -= ODP_BUFS_PER_CHUNK; > > + /* Default initialization parameters */ > > + static _odp_buffer_pool_init_t default_init_params = { > > + .udata_size = 0, > > + .buf_init = NULL, > > + .buf_init_arg = NULL, > > + }; > > > > - /* unlink */ > > - rem_buf_index(chunk_hdr); > > - return chunk_hdr; > > -} > > + _odp_buffer_pool_init_t *init_params = &default_init_params; > > > > + if (params == NULL) > > + return ODP_BUFFER_POOL_INVALID; > > > > -static void add_chunk(pool_entry_t *pool, odp_buffer_chunk_hdr_t > *chunk_hdr) > > -{ > > - if (pool->s.head) /* link pool head to the chunk */ > > - add_buf_index(chunk_hdr, pool->s.head->buf_hdr.index); > > - else > > - add_buf_index(chunk_hdr, NULL_INDEX); > > + /* Restriction for v1.0: All buffers are unsegmented */ > > + const int unsegmented = 1; > > > > - pool->s.head = chunk_hdr; > > - pool->s.free_bufs += ODP_BUFS_PER_CHUNK; > > -} > > + /* Restriction for v1.0: No zeroization support */ > > + const int zeroized = 0; > > > > + /* Restriction for v1.0: No udata support */ > > + uint32_t udata_stride = (init_params->udata_size > sizeof(void *)) > ?
> > + ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) : > > + 0; > > > > -static void check_align(pool_entry_t *pool, odp_buffer_hdr_t *hdr) > > -{ > > - if (!ODP_ALIGNED_CHECK_POWER_2(hdr->addr, pool->s.user_align)) { > > - ODP_ABORT("check_align: user data align error %p, align > %zu\n", > > - hdr->addr, pool->s.user_align); > > - } > > - > > - if (!ODP_ALIGNED_CHECK_POWER_2(hdr, ODP_CACHE_LINE_SIZE)) { > > - ODP_ABORT("check_align: hdr align error %p, align %i\n", > > - hdr, ODP_CACHE_LINE_SIZE); > > - } > > -} > > - > > + uint32_t blk_size, buf_stride; > > > > -static void fill_hdr(void *ptr, pool_entry_t *pool, uint32_t index, > > - int buf_type) > > -{ > > - odp_buffer_hdr_t *hdr = (odp_buffer_hdr_t *)ptr; > > - size_t size = pool->s.hdr_size; > > - uint8_t *buf_data; > > - > > - if (buf_type == ODP_BUFFER_TYPE_CHUNK) > > - size = sizeof(odp_buffer_chunk_hdr_t); > > + switch (params->buf_type) { > > + case ODP_BUFFER_TYPE_RAW: > > + blk_size = params->buf_size; > > > > - switch (pool->s.buf_type) { > > - odp_raw_buffer_hdr_t *raw_hdr; > > - odp_packet_hdr_t *packet_hdr; > > - odp_timeout_hdr_t *tmo_hdr; > > - odp_any_buffer_hdr_t *any_hdr; > > + /* Optimize small raw buffers */ > > + if (blk_size > ODP_MAX_INLINE_BUF) > > + blk_size = ODP_ALIGN_ROUNDUP(blk_size, TAG_ALIGN); > > > > - case ODP_BUFFER_TYPE_RAW: > > - raw_hdr = ptr; > > - buf_data = raw_hdr->buf_data; > > + buf_stride = sizeof(odp_buffer_hdr_stride); > > break; > > + > > case ODP_BUFFER_TYPE_PACKET: > > - packet_hdr = ptr; > > - buf_data = packet_hdr->buf_data; > > + if (unsegmented) > > + blk_size = > > + > ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); > > + else > > + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, > > + > ODP_CONFIG_BUF_SEG_SIZE); > > + buf_stride = sizeof(odp_packet_hdr_stride); > > break; > > + > > case ODP_BUFFER_TYPE_TIMEOUT: > > - tmo_hdr = ptr; > > - buf_data = tmo_hdr->buf_data; > > + blk_size = 0; /* Timeouts have no block data, only > metadata */ > > + buf_stride = sizeof(odp_timeout_hdr_stride); > > break; > > + > > case ODP_BUFFER_TYPE_ANY: > > - any_hdr = ptr; > > - buf_data = any_hdr->buf_data; > > + if (unsegmented) > > + blk_size = > > + > ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); > > + else > > + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, > > + > ODP_CONFIG_BUF_SEG_SIZE); > > + buf_stride = sizeof(odp_any_hdr_stride); > > break; > > - default: > > - ODP_ABORT("Bad buffer type\n"); > > - } > > - > > - memset(hdr, 0, size); > > - > > - set_handle(hdr, pool, index); > > - > > - hdr->addr = &buf_data[pool->s.buf_offset - pool->s.hdr_size]; > > - hdr->index = index; > > - hdr->size = pool->s.user_size; > > - hdr->pool_hdl = pool->s.pool_hdl; > > - hdr->type = buf_type; > > - > > - check_align(pool, hdr); > > -} > > - > > - > > -static void link_bufs(pool_entry_t *pool) > > -{ > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - size_t hdr_size; > > - size_t data_size; > > - size_t data_align; > > - size_t tot_size; > > - size_t offset; > > - size_t min_size; > > - uint64_t pool_size; > > - uintptr_t buf_base; > > - uint32_t index; > > - uintptr_t pool_base; > > - int buf_type; > > - > > - buf_type = pool->s.buf_type; > > - data_size = pool->s.user_size; > > - data_align = pool->s.user_align; > > - pool_size = pool->s.pool_size; > > - pool_base = (uintptr_t) pool->s.pool_base_addr; > > - > > - if (buf_type == ODP_BUFFER_TYPE_RAW) { > > - hdr_size = sizeof(odp_raw_buffer_hdr_t); > > - } else if (buf_type == ODP_BUFFER_TYPE_PACKET) { > > - hdr_size = sizeof(odp_packet_hdr_t); > > - } 
else if (buf_type == ODP_BUFFER_TYPE_TIMEOUT) { > > - hdr_size = sizeof(odp_timeout_hdr_t); > > - } else if (buf_type == ODP_BUFFER_TYPE_ANY) { > > - hdr_size = sizeof(odp_any_buffer_hdr_t); > > - } else > > - ODP_ABORT("odp_buffer_pool_create: Bad type %i\n", > buf_type); > > - > > - > > - /* Chunk must fit into buffer data area.*/ > > - min_size = sizeof(odp_buffer_chunk_hdr_t) - hdr_size; > > - if (data_size < min_size) > > - data_size = min_size; > > - > > - /* Roundup data size to full cachelines */ > > - data_size = ODP_CACHE_LINE_SIZE_ROUNDUP(data_size); > > - > > - /* Min cacheline alignment for buffer header and data */ > > - data_align = ODP_CACHE_LINE_SIZE_ROUNDUP(data_align); > > - offset = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size); > > - > > - /* Multiples of cacheline size */ > > - if (data_size > data_align) > > - tot_size = data_size + offset; > > - else > > - tot_size = data_align + offset; > > - > > - /* First buffer */ > > - buf_base = ODP_ALIGN_ROUNDUP(pool_base + offset, data_align) - > offset; > > - > > - pool->s.hdr_size = hdr_size; > > - pool->s.buf_base = buf_base; > > - pool->s.buf_size = tot_size; > > - pool->s.buf_offset = offset; > > - index = 0; > > - > > - chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index); > > - pool->s.head = NULL; > > - pool_size -= buf_base - pool_base; > > - > > - while (pool_size > ODP_BUFS_PER_CHUNK * tot_size) { > > - int i; > > - > > - fill_hdr(chunk_hdr, pool, index, ODP_BUFFER_TYPE_CHUNK); > > - > > - index++; > > - > > - for (i = 0; i < ODP_BUFS_PER_CHUNK - 1; i++) { > > - odp_buffer_hdr_t *hdr = index_to_hdr(pool, index); > > - > > - fill_hdr(hdr, pool, index, buf_type); > > - > > - add_buf_index(chunk_hdr, index); > > - index++; > > - } > > - > > - add_chunk(pool, chunk_hdr); > > > > - chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, > > - index); > > - pool->s.num_bufs += ODP_BUFS_PER_CHUNK; > > - pool_size -= ODP_BUFS_PER_CHUNK * tot_size; > > + default: > > + return ODP_BUFFER_POOL_INVALID; > > } > > -} > > - > > - > > -odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > - void *base_addr, uint64_t size, > > - size_t buf_size, size_t buf_align, > > - int buf_type) > > -{ > > - odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; > > - pool_entry_t *pool; > > - uint32_t i; > > - > > + /* Find an unused buffer pool slot and initialize it as requested > */ > > for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) { > > pool = get_pool_entry(i); > > > > LOCK(&pool->s.lock); > > + if (pool->s.pool_shm != ODP_SHM_INVALID) { > > + UNLOCK(&pool->s.lock); > > + continue; > > + } > > + > > + /* found free pool */ > > + size_t block_size, mdata_size, udata_size; > > > > - if (pool->s.buf_base == 0) { > > - /* found free pool */ > > > > + pool->s.flags.all = 0; > > > > + if (name == NULL) { > > + pool->s.name[0] = 0; > > + } else { > > strncpy(pool->s.name, name, > > ODP_BUFFER_POOL_NAME_LEN - 1); > > pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0; > > - pool->s.pool_base_addr = base_addr; > > - pool->s.pool_size = size; > > - pool->s.user_size = buf_size; > > - pool->s.user_align = buf_align; > > - pool->s.buf_type = buf_type; > > - > > - link_bufs(pool); > > - > > - UNLOCK(&pool->s.lock); > > + pool->s.flags.has_name = 1; > > + } > > > > - pool_hdl = pool->s.pool_hdl; > > - break; > > + pool->s.params = *params; > > + pool->s.init_params = *init_params; > > + > > + mdata_size = params->num_bufs * buf_stride; > > + udata_size = params->num_bufs * udata_stride; > > + > > + /* Optimize for short buffers: Data stored in
buffer hdr */ > > + if (blk_size <= ODP_MAX_INLINE_BUF) > > + block_size = 0; > > + else > > + block_size = params->num_bufs * blk_size; > > + > > + pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(mdata_size + > > + udata_size + > > + block_size); > > + > > + if (shm == ODP_SHM_NULL) { > > + shm = odp_shm_reserve(pool->s.name, > > + pool->s.pool_size, > > + ODP_PAGE_SIZE, 0); > > + if (shm == ODP_SHM_INVALID) { > > + UNLOCK(&pool->s.lock); > > + return ODP_BUFFER_POOL_INVALID; > > + } > > + pool->s.pool_base_addr = odp_shm_addr(shm); > > + } else { > > + odp_shm_info_t info; > > + if (odp_shm_info(shm, &info) != 0 || > > + info.size < pool->s.pool_size) { > > + UNLOCK(&pool->s.lock); > > + return ODP_BUFFER_POOL_INVALID; > > + } > > + pool->s.pool_base_addr = odp_shm_addr(shm); > > + void *page_addr = > > + > ODP_ALIGN_ROUNDUP_PTR(pool->s.pool_base_addr, > > + ODP_PAGE_SIZE); > > + if (pool->s.pool_base_addr != page_addr) { > > + if (info.size < pool->s.pool_size + > > + ((size_t)page_addr - > > + (size_t)pool->s.pool_base_addr)) { > > + UNLOCK(&pool->s.lock); > > + return ODP_BUFFER_POOL_INVALID; > > + } > > + pool->s.pool_base_addr = page_addr; > > + } > > + pool->s.flags.user_supplied_shm = 1; > > } > > > > + pool->s.pool_shm = shm; > > + > > + /* Now safe to unlock since pool entry has been allocated > */ > > UNLOCK(&pool->s.lock); > > + > > + pool->s.flags.unsegmented = unsegmented; > > + pool->s.flags.zeroized = zeroized; > > + pool->s.seg_size = unsegmented ? > > + blk_size : ODP_CONFIG_BUF_SEG_SIZE; > > + > > + uint8_t *udata_base_addr = pool->s.pool_base_addr + > mdata_size; > > + uint8_t *block_base_addr = udata_base_addr + udata_size; > > + > > + /* bufcount will decrement down to 0 as we populate > freelist */ > > + odp_atomic_store_u32(&pool->s.bufcount, params->num_bufs); > > + pool->s.buf_stride = buf_stride; > > + pool->s.high_wm = 0; > > + pool->s.low_wm = 0; > > + pool->s.headroom = 0; > > + pool->s.tailroom = 0; > > + _odp_atomic_ptr_store(&pool->s.buf_freelist, NULL, > > + _ODP_MEMMODEL_RLX); > > + _odp_atomic_ptr_store(&pool->s.blk_freelist, NULL, > > + _ODP_MEMMODEL_RLX); > > + > > + uint8_t *buf = udata_base_addr - buf_stride; > > + uint8_t *udat = udata_stride == 0 ?
NULL : > > + block_base_addr - udata_stride; > > + > > + /* Init buffer common header and add to pool buffer > freelist */ > > + do { > > + odp_buffer_hdr_t *tmp = > > + (odp_buffer_hdr_t *)(void *)buf; > > + > > + /* Initialize buffer metadata */ > > + tmp->allocator = ODP_CONFIG_MAX_THREADS; > > + tmp->flags.all = 0; > > + tmp->flags.zeroized = zeroized; > > + tmp->size = 0; > > + odp_atomic_store_u32(&tmp->ref_count, 0); > > + tmp->type = params->buf_type; > > + tmp->pool_hdl = pool->s.pool_hdl; > > + tmp->udata_addr = (void *)udat; > > + tmp->udata_size = init_params->udata_size; > > + tmp->segcount = 0; > > + tmp->segsize = pool->s.seg_size; > > + tmp->handle.handle = odp_buffer_encode_handle(tmp); > > + > > + /* Set 1st seg addr for zero-len buffers */ > > + tmp->addr[0] = NULL; > > + > > + /* Special case for short buffer data */ > > + if (blk_size <= ODP_MAX_INLINE_BUF) { > > + tmp->flags.hdrdata = 1; > > + if (blk_size > 0) { > > + tmp->segcount = 1; > > + tmp->addr[0] = &tmp->addr[1]; > > + tmp->size = blk_size; > > + } > > + } > > + > > + /* Push buffer onto pool's freelist */ > > + ret_buf(&pool->s, tmp); > > + buf -= buf_stride; > > + udat -= udata_stride; > > + } while (buf >= pool->s.pool_base_addr); > > + > > + /* Form block freelist for pool */ > > + uint8_t *blk = pool->s.pool_base_addr + pool->s.pool_size - > > + pool->s.seg_size; > > + > > + if (blk_size > ODP_MAX_INLINE_BUF) > > + do { > > + ret_blk(&pool->s, blk); > > + blk -= pool->s.seg_size; > > + } while (blk >= block_base_addr); > > + > > + /* Initialize pool statistics counters */ > > + odp_atomic_store_u64(&pool->s.bufallocs, 0); > > + odp_atomic_store_u64(&pool->s.buffrees, 0); > > + odp_atomic_store_u64(&pool->s.blkallocs, 0); > > + odp_atomic_store_u64(&pool->s.blkfrees, 0); > > + odp_atomic_store_u64(&pool->s.bufempty, 0); > > + odp_atomic_store_u64(&pool->s.blkempty, 0); > > + odp_atomic_store_u64(&pool->s.high_wm_count, 0); > > + odp_atomic_store_u64(&pool->s.low_wm_count, 0); > > + > > + pool_hdl = pool->s.pool_hdl; > > + break; > > } > > > > return pool_hdl; > > @@ -431,145 +353,126 @@ odp_buffer_pool_t odp_buffer_pool_lookup(const > char *name) > > return ODP_BUFFER_POOL_INVALID; > > } > > > > - > > -odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) > > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size) > > { > > - pool_entry_t *pool; > > - odp_buffer_chunk_hdr_t *chunk; > > - odp_buffer_bits_t handle; > > - uint32_t pool_id = pool_handle_to_index(pool_hdl); > > - > > - pool = get_pool_entry(pool_id); > > - chunk = local_chunk[pool_id]; > > - > > - if (chunk == NULL) { > > - LOCK(&pool->s.lock); > > - chunk = rem_chunk(pool); > > - UNLOCK(&pool->s.lock); > > - > > - if (chunk == NULL) > > - return ODP_BUFFER_INVALID; > > - > > - local_chunk[pool_id] = chunk; > > + pool_entry_t *pool = odp_pool_to_entry(pool_hdl); > > + size_t totsize = pool->s.headroom + size + pool->s.tailroom; > > + odp_anybuf_t *buf; > > + uint8_t *blk; > > + > > + if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) || > > + (!pool->s.flags.unsegmented && totsize > > ODP_CONFIG_BUF_MAX_SIZE)) > > + return ODP_BUFFER_INVALID; > > + > > + buf = (odp_anybuf_t *)(void *)get_buf(&pool->s); > > + > > + if (buf == NULL) > > + return ODP_BUFFER_INVALID; > > + > > + /* Get blocks for this buffer, if pool uses application data */ > > + if (buf->buf.size < totsize) { > > + size_t needed = totsize - buf->buf.size; > > + do { > > + blk = get_blk(&pool->s); > > + if (blk == NULL) { > > + ret_buf(&pool->s,
&buf->buf); > > + return ODP_BUFFER_INVALID; > > + } > > + buf->buf.addr[buf->buf.segcount++] = blk; > > + needed -= pool->s.seg_size; > > + } while ((ssize_t)needed > 0); > > + buf->buf.size = buf->buf.segcount * pool->s.seg_size; > > } > > > > - if (chunk->chunk.num_bufs == 0) { > > - /* give the chunk buffer */ > > - local_chunk[pool_id] = NULL; > > - chunk->buf_hdr.type = pool->s.buf_type; > > + /* By default, buffers inherit their pool's zeroization setting */ > > + buf->buf.flags.zeroized = pool->s.flags.zeroized; > > > > - handle = chunk->buf_hdr.handle; > > - } else { > > - odp_buffer_hdr_t *hdr; > > - uint32_t index; > > - index = rem_buf_index(chunk); > > - hdr = index_to_hdr(pool, index); > > + if (buf->buf.type == ODP_BUFFER_TYPE_PACKET) { > > + packet_init(pool, &buf->pkt, size); > > > > - handle = hdr->handle; > > + if (pool->s.init_params.buf_init != NULL) > > + (*pool->s.init_params.buf_init) > > + (buf->buf.handle.handle, > > + pool->s.init_params.buf_init_arg); > > } > > > > - return handle.u32; > > + return odp_hdr_to_buf(&buf->buf); > > } > > > > - > > -void odp_buffer_free(odp_buffer_t buf) > > +odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) > > { > > - odp_buffer_hdr_t *hdr; > > - uint32_t pool_id; > > - pool_entry_t *pool; > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - > > - hdr = odp_buf_to_hdr(buf); > > - pool_id = pool_handle_to_index(hdr->pool_hdl); > > - pool = get_pool_entry(pool_id); > > - chunk_hdr = local_chunk[pool_id]; > > - > > - if (chunk_hdr && chunk_hdr->chunk.num_bufs == ODP_BUFS_PER_CHUNK - > 1) { > > - /* Current chunk is full. Push back to the pool */ > > - LOCK(&pool->s.lock); > > - add_chunk(pool, chunk_hdr); > > - UNLOCK(&pool->s.lock); > > - chunk_hdr = NULL; > > - } > > - > > - if (chunk_hdr == NULL) { > > - /* Use this buffer */ > > - chunk_hdr = (odp_buffer_chunk_hdr_t *)hdr; > > - local_chunk[pool_id] = chunk_hdr; > > - chunk_hdr->chunk.num_bufs = 0; > > - } else { > > - /* Add to current chunk */ > > - add_buf_index(chunk_hdr, hdr->index); > > - } > > + return buffer_alloc(pool_hdl, > > + > odp_pool_to_entry(pool_hdl)->s.params.buf_size); > > } > > > > - > > -odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) > > +void odp_buffer_free(odp_buffer_t buf) > > { > > - odp_buffer_hdr_t *hdr; > > - > > - hdr = odp_buf_to_hdr(buf); > > - return hdr->pool_hdl; > > + odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf); > > + pool_entry_t *pool = odp_buf_to_pool(buf_hdr); > > + ret_buf(&pool->s, buf_hdr); > > } > > > > - > > void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl) > > { > > pool_entry_t *pool; > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - uint32_t i; > > uint32_t pool_id; > > > > pool_id = pool_handle_to_index(pool_hdl); > > pool = get_pool_entry(pool_id); > > > > - ODP_PRINT("Pool info\n"); > > - ODP_PRINT("---------\n"); > > - ODP_PRINT(" pool %i\n", pool->s.pool_hdl); > > - ODP_PRINT(" name %s\n", pool->s.name); > > - ODP_PRINT(" pool base %p\n", > pool->s.pool_base_addr); > > - ODP_PRINT(" buf base 0x%"PRIxPTR"\n", pool->s.buf_base); > > - ODP_PRINT(" pool size 0x%"PRIx64"\n", pool->s.pool_size); > > - ODP_PRINT(" buf size %zu\n", pool->s.user_size); > > - ODP_PRINT(" buf align %zu\n", pool->s.user_align); > > - ODP_PRINT(" hdr size %zu\n", pool->s.hdr_size); > > - ODP_PRINT(" alloc size %zu\n", pool->s.buf_size); > > - ODP_PRINT(" offset to hdr %zu\n", pool->s.buf_offset); > > - ODP_PRINT(" num bufs %"PRIu64"\n", pool->s.num_bufs); > > - ODP_PRINT(" free bufs %"PRIu64"\n", pool->s.free_bufs); > > - > > - /* first chunk 
*/ > > - chunk_hdr = pool->s.head; > > - > > - if (chunk_hdr == NULL) { > > - ODP_ERR(" POOL EMPTY\n"); > > - return; > > - } > > - > > - ODP_PRINT("\n First chunk\n"); > > - > > - for (i = 0; i < chunk_hdr->chunk.num_bufs - 1; i++) { > > - uint32_t index; > > - odp_buffer_hdr_t *hdr; > > - > > - index = chunk_hdr->chunk.buf_index[i]; > > - hdr = index_to_hdr(pool, index); > > - > > - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, hdr->addr, > > - index); > > - } > > - > > - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, > chunk_hdr->buf_hdr.addr, > > - chunk_hdr->buf_hdr.index); > > - > > - /* next chunk */ > > - chunk_hdr = next_chunk(pool, chunk_hdr); > > + uint32_t bufcount = odp_atomic_load_u32(&pool->s.bufcount); > > + uint32_t blkcount = odp_atomic_load_u32(&pool->s.blkcount); > > + uint64_t bufallocs = odp_atomic_load_u64(&pool->s.bufallocs); > > + uint64_t buffrees = odp_atomic_load_u64(&pool->s.buffrees); > > + uint64_t blkallocs = odp_atomic_load_u64(&pool->s.blkallocs); > > + uint64_t blkfrees = odp_atomic_load_u64(&pool->s.blkfrees); > > + uint64_t bufempty = odp_atomic_load_u64(&pool->s.bufempty); > > + uint64_t blkempty = odp_atomic_load_u64(&pool->s.blkempty); > > + uint64_t hiwmct = odp_atomic_load_u64(&pool->s.high_wm_count); > > + uint64_t lowmct = odp_atomic_load_u64(&pool->s.low_wm_count); > > + > > + ODP_DBG("Pool info\n"); > > + ODP_DBG("---------\n"); > > + ODP_DBG(" pool %i\n", pool->s.pool_hdl); > > + ODP_DBG(" name %s\n", > > + pool->s.flags.has_name ? pool->s.name : "Unnamed Pool"); > > + ODP_DBG(" pool type %s\n", > > + pool->s.params.buf_type == ODP_BUFFER_TYPE_RAW ? "raw" : > > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_PACKET ? > "packet" : > > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_TIMEOUT ? > "timeout" : > > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_ANY ? "any" : > > + "unknown")))); > > + ODP_DBG(" pool storage %sODP managed\n", > > + pool->s.flags.user_supplied_shm ? > > + "application provided, " : ""); > > + ODP_DBG(" pool status %s\n", > > + pool->s.flags.quiesced ? "quiesced" : "active"); > > + ODP_DBG(" pool opts %s, %s, %s\n", > > + pool->s.flags.unsegmented ? "unsegmented" : "segmented", > > + pool->s.flags.zeroized ? "zeroized" : "non-zeroized", > > + pool->s.flags.predefined ? "predefined" : "created"); > > + ODP_DBG(" pool base %p\n", pool->s.pool_base_addr); > > + ODP_DBG(" pool size %zu (%zu pages)\n", > > + pool->s.pool_size, pool->s.pool_size / ODP_PAGE_SIZE); > > + ODP_DBG(" udata size %zu\n", pool->s.init_params.udata_size); > > + ODP_DBG(" buf size %zu\n", pool->s.params.buf_size); > > + ODP_DBG(" num bufs %u\n", pool->s.params.num_bufs); > > + ODP_DBG(" bufs in use %u\n", bufcount); > > + ODP_DBG(" buf allocs %lu\n", bufallocs); > > + ODP_DBG(" buf frees %lu\n", buffrees); > > + ODP_DBG(" buf empty %lu\n", bufempty); > > + ODP_DBG(" blk size %zu\n", > > + pool->s.seg_size > ODP_MAX_INLINE_BUF ? 
pool->s.seg_size : > 0); > > + ODP_DBG(" blks available %u\n", blkcount); > > + ODP_DBG(" blk allocs %lu\n", blkallocs); > > + ODP_DBG(" blk frees %lu\n", blkfrees); > > + ODP_DBG(" blk empty %lu\n", blkempty); > > + ODP_DBG(" high wm count %lu\n", hiwmct); > > + ODP_DBG(" low wm count %lu\n", lowmct); > > +} > > > > - if (chunk_hdr) { > > - ODP_PRINT(" Next chunk\n"); > > - ODP_PRINT(" addr %p, id %"PRIu32"\n", > chunk_hdr->buf_hdr.addr, > > - chunk_hdr->buf_hdr.index); > > - } > > > > - ODP_PRINT("\n"); > > +odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) > > +{ > > + return odp_buf_to_hdr(buf)->pool_hdl; > > } > > diff --git a/platform/linux-generic/odp_packet.c > b/platform/linux-generic/odp_packet.c > > index f8fd8ef..8deae3d 100644 > > --- a/platform/linux-generic/odp_packet.c > > +++ b/platform/linux-generic/odp_packet.c > > @@ -23,17 +23,9 @@ static inline uint8_t parse_ipv6(odp_packet_hdr_t > *pkt_hdr, > > void odp_packet_init(odp_packet_t pkt) > > { > > odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); > > - const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, > buf_hdr); > > - uint8_t *start; > > - size_t len; > > - > > - start = (uint8_t *)pkt_hdr + start_offset; > > - len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset; > > - memset(start, 0, len); > > + pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr); > > > > - pkt_hdr->l2_offset = ODP_PACKET_OFFSET_INVALID; > > - pkt_hdr->l3_offset = ODP_PACKET_OFFSET_INVALID; > > - pkt_hdr->l4_offset = ODP_PACKET_OFFSET_INVALID; > > + packet_init(pool, pkt_hdr, 0); > > } > > > > odp_packet_t odp_packet_from_buffer(odp_buffer_t buf) > > @@ -63,7 +55,7 @@ uint8_t *odp_packet_addr(odp_packet_t pkt) > > > > uint8_t *odp_packet_data(odp_packet_t pkt) > > { > > - return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->frame_offset; > > + return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->headroom; > > } > > > > > > @@ -130,20 +122,13 @@ void odp_packet_set_l4_offset(odp_packet_t pkt, > size_t offset) > > > > int odp_packet_is_segmented(odp_packet_t pkt) > > { > > - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt); > > - > > - if (buf_hdr->scatter.num_bufs == 0) > > - return 0; > > - else > > - return 1; > > + return odp_packet_hdr(pkt)->buf_hdr.segcount > 1; > > } > > > > > > int odp_packet_seg_count(odp_packet_t pkt) > > { > > - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt); > > - > > - return (int)buf_hdr->scatter.num_bufs + 1; > > + return odp_packet_hdr(pkt)->buf_hdr.segcount; > > } > > > > > > @@ -169,7 +154,7 @@ void odp_packet_parse(odp_packet_t pkt, size_t len, > size_t frame_offset) > > uint8_t ip_proto = 0; > > > > pkt_hdr->input_flags.eth = 1; > > - pkt_hdr->frame_offset = frame_offset; > > + pkt_hdr->l2_offset = frame_offset; > > pkt_hdr->frame_len = len; > > > > if (len > ODPH_ETH_LEN_MAX) > > @@ -329,8 +314,6 @@ void odp_packet_print(odp_packet_t pkt) > > len += snprintf(&str[len], n-len, > > " output_flags 0x%x\n", hdr->output_flags.all); > > len += snprintf(&str[len], n-len, > > - " frame_offset %u\n", hdr->frame_offset); > > - len += snprintf(&str[len], n-len, > > " l2_offset %u\n", hdr->l2_offset); > > len += snprintf(&str[len], n-len, > > " l3_offset %u\n", hdr->l3_offset); > > @@ -357,14 +340,13 @@ int odp_packet_copy(odp_packet_t pkt_dst, > odp_packet_t pkt_src) > > if (pkt_dst == ODP_PACKET_INVALID || pkt_src == ODP_PACKET_INVALID) > > return -1; > > > > - if (pkt_hdr_dst->buf_hdr.size < > > - pkt_hdr_src->frame_len + pkt_hdr_src->frame_offset) > > + if 
(pkt_hdr_dst->buf_hdr.size < pkt_hdr_src->frame_len) > > return -1; > > > > /* Copy packet header */ > > start_dst = (uint8_t *)pkt_hdr_dst + start_offset; > > start_src = (uint8_t *)pkt_hdr_src + start_offset; > > - len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset; > > + len = sizeof(odp_packet_hdr_t) - start_offset; > > memcpy(start_dst, start_src, len); > > > > /* Copy frame payload */ > > @@ -373,13 +355,6 @@ int odp_packet_copy(odp_packet_t pkt_dst, > odp_packet_t pkt_src) > > len = pkt_hdr_src->frame_len; > > memcpy(start_dst, start_src, len); > > > > - /* Copy useful things from the buffer header */ > > - pkt_hdr_dst->buf_hdr.cur_offset = pkt_hdr_src->buf_hdr.cur_offset; > > - > > - /* Create a copy of the scatter list */ > > - odp_buffer_copy_scatter(odp_packet_to_buffer(pkt_dst), > > - odp_packet_to_buffer(pkt_src)); > > - > > return 0; > > } > > > > diff --git a/platform/linux-generic/odp_queue.c > b/platform/linux-generic/odp_queue.c > > index 1318bcd..b68a7c7 100644 > > --- a/platform/linux-generic/odp_queue.c > > +++ b/platform/linux-generic/odp_queue.c > > @@ -11,6 +11,7 @@ > > #include <odp_buffer.h> > > #include <odp_buffer_internal.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > #include <odp_internal.h> > > #include <odp_shared_memory.h> > > #include <odp_schedule_internal.h> > > diff --git a/platform/linux-generic/odp_schedule.c > b/platform/linux-generic/odp_schedule.c > > index cc84e11..a8f1938 100644 > > --- a/platform/linux-generic/odp_schedule.c > > +++ b/platform/linux-generic/odp_schedule.c > > @@ -83,8 +83,8 @@ int odp_schedule_init_global(void) > > { > > odp_shm_t shm; > > odp_buffer_pool_t pool; > > - void *pool_base; > > int i, j; > > + odp_buffer_pool_param_t params; > > > > ODP_DBG("Schedule init ... 
"); > > > > @@ -99,20 +99,12 @@ int odp_schedule_init_global(void) > > return -1; > > } > > > > - shm = odp_shm_reserve("odp_sched_pool", > > - SCHED_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = sizeof(queue_desc_t); > > + params.buf_align = ODP_CACHE_LINE_SIZE; > > + params.num_bufs = SCHED_POOL_SIZE/sizeof(queue_desc_t); > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - pool_base = odp_shm_addr(shm); > > - > > - if (pool_base == NULL) { > > - ODP_ERR("Schedule init: Shm reserve failed.\n"); > > - return -1; > > - } > > - > > - pool = odp_buffer_pool_create("odp_sched_pool", pool_base, > > - SCHED_POOL_SIZE, > sizeof(queue_desc_t), > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > + pool = odp_buffer_pool_create("odp_sched_pool", ODP_SHM_NULL, > ¶ms); > > > > if (pool == ODP_BUFFER_POOL_INVALID) { > > ODP_ERR("Schedule init: Pool create failed.\n"); > > diff --git a/platform/linux-generic/odp_timer.c > b/platform/linux-generic/odp_timer.c > > index 313c713..914cb58 100644 > > --- a/platform/linux-generic/odp_timer.c > > +++ b/platform/linux-generic/odp_timer.c > > @@ -5,9 +5,10 @@ > > */ > > > > #include <odp_timer.h> > > -#include <odp_timer_internal.h> > > #include <odp_time.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > +#include <odp_timer_internal.h> > > #include <odp_internal.h> > > #include <odp_atomic.h> > > #include <odp_spinlock.h> > > diff --git a/test/api_test/odp_timer_ping.c > b/test/api_test/odp_timer_ping.c > > index 7704181..1566f4f 100644 > > --- a/test/api_test/odp_timer_ping.c > > +++ b/test/api_test/odp_timer_ping.c > > @@ -319,9 +319,8 @@ int main(int argc ODP_UNUSED, char *argv[] > ODP_UNUSED) > > ping_arg_t pingarg; > > odp_queue_t queue; > > odp_buffer_pool_t pool; > > - void *pool_base; > > int i; > > - odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > if (odp_test_global_init() != 0) > > return -1; > > @@ -334,14 +333,14 @@ int main(int argc ODP_UNUSED, char *argv[] > ODP_UNUSED) > > /* > > * Create message pool > > */ > > - shm = odp_shm_reserve("msg_pool", > > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > - > > - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, > > - BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > + > > + params.buf_size = BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = MSG_POOL_SIZE/BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > + > > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); > > + > > if (pool == ODP_BUFFER_POOL_INVALID) { > > LOG_ERR("Pool create failed.\n"); > > return -1; > > diff --git a/test/validation/odp_crypto.c b/test/validation/odp_crypto.c > > index 9342aca..e329b05 100644 > > --- a/test/validation/odp_crypto.c > > +++ b/test/validation/odp_crypto.c > > @@ -31,8 +31,7 @@ CU_SuiteInfo suites[] = { > > > > int main(void) > > { > > - odp_shm_t shm; > > - void *pool_base; > > + odp_buffer_pool_param_t params; > > odp_buffer_pool_t pool; > > odp_queue_t out_queue; > > > > @@ -42,21 +41,13 @@ int main(void) > > } > > odp_init_local(); > > > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, > > - ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - pool_base = odp_shm_addr(shm); > > - if (!pool_base) { > > - fprintf(stderr, "Packet pool allocation 
failed.\n"); > > - return -1; > > - } > > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > ¶ms); > > > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (ODP_BUFFER_POOL_INVALID == pool) { > > fprintf(stderr, "Packet pool creation failed.\n"); > > return -1; > > @@ -67,20 +58,14 @@ int main(void) > > fprintf(stderr, "Crypto outq creation failed.\n"); > > return -1; > > } > > - shm = odp_shm_reserve("shm_compl_pool", > > - SHM_COMPL_POOL_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_SHM_SW_ONLY); > > - pool_base = odp_shm_addr(shm); > > - if (!pool_base) { > > - fprintf(stderr, "Completion pool allocation failed.\n"); > > - return -1; > > - } > > - pool = odp_buffer_pool_create("compl_pool", pool_base, > > - SHM_COMPL_POOL_SIZE, > > - SHM_COMPL_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > + > > + params.buf_size = SHM_COMPL_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_COMPL_POOL_SIZE/SHM_COMPL_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > + > > + pool = odp_buffer_pool_create("compl_pool", ODP_SHM_NULL, ¶ms); > > + > > if (ODP_BUFFER_POOL_INVALID == pool) { > > fprintf(stderr, "Completion pool creation failed.\n"); > > return -1; > > diff --git a/test/validation/odp_queue.c b/test/validation/odp_queue.c > > index 09dba0e..9d0f3d7 100644 > > --- a/test/validation/odp_queue.c > > +++ b/test/validation/odp_queue.c > > @@ -16,21 +16,14 @@ static int queue_contest = 0xff; > > static int test_odp_buffer_pool_init(void) > > { > > odp_buffer_pool_t pool; > > - void *pool_base; > > - odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > - shm = odp_shm_reserve("msg_pool", > > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = 0; > > + params.buf_align = ODP_CACHE_LINE_SIZE; > > + params.num_bufs = 1024 * 10; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - pool_base = odp_shm_addr(shm); > > - > > - if (NULL == pool_base) { > > - printf("Shared memory reserve failed.\n"); > > - return -1; > > - } > > - > > - pool = odp_buffer_pool_create("msg_pool", pool_base, > MSG_POOL_SIZE, 0, > > - ODP_CACHE_LINE_SIZE, > ODP_BUFFER_TYPE_RAW); > > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); > > > > if (ODP_BUFFER_POOL_INVALID == pool) { > > printf("Pool create failed.\n"); > > -- > > 1.8.3.2 > > > > > > _______________________________________________ > > lng-odp mailing list > > lng-odp@lists.linaro.org > > http://lists.linaro.org/mailman/listinfo/lng-odp > >
Fine. We can add segment headroom/tailroom support in the next version. Reviewed-by: Bala Manoharan <bala.manoharan@linaro.org> On Wednesday 03 December 2014 05:24 PM, Bill Fischofer wrote: > Segment-level headroom/tailroom is not part of this patch since Petri > has stated that we won't be doing that for v1.0. It can be added at a > later date if we decide we need it. > > On Wed, Dec 3, 2014 at 2:25 AM, Bala Manoharan > <bala.manoharan@linaro.org> wrote: > > I have only a few minor comments on this patch. > One major query: currently this patch does not integrate > headroom/tailroom at the segment level. Is that feature needed > for 1.0? If it needs to be supported then we will need some > modifications to the segment handling. > > Regards, > Bala > On Tue, Dec 02, 2014 at 01:17:01PM -0600, Bill Fischofer wrote: > > Restructure ODP buffer pool internals to support new APIs. > > Implements new odp_buffer_pool_create() API. > > > > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> > > --- > > example/generator/odp_generator.c | 19 +- > > example/ipsec/odp_ipsec.c | 57 +- > > example/l2fwd/odp_l2fwd.c | 19 +- > > example/odp_example/odp_example.c | 18 +- > > example/packet/odp_pktio.c | 19 +- > > example/timer/odp_timer_test.c | 13 +- > > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- > > platform/linux-generic/include/api/odp_config.h | 10 + > > .../linux-generic/include/api/odp_platform_types.h | 9 + > > .../linux-generic/include/api/odp_shared_memory.h | 10 +- > > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ > > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- > > .../include/odp_buffer_pool_internal.h | 278 ++++++-- > > .../linux-generic/include/odp_packet_internal.h | 50 +- > > .../linux-generic/include/odp_timer_internal.h | 11 +- > > platform/linux-generic/odp_buffer.c | 31 +- > > platform/linux-generic/odp_buffer_pool.c | 711 > +++++++++------------ > > platform/linux-generic/odp_packet.c | 41 +- > > platform/linux-generic/odp_queue.c | 1 + > > platform/linux-generic/odp_schedule.c | 20 +- > > platform/linux-generic/odp_timer.c | 3 +- > > test/api_test/odp_timer_ping.c | 19 +- > > test/validation/odp_crypto.c | 43 +- > > test/validation/odp_queue.c | 19 +- > > 24 files changed, 1024 insertions(+), 762 deletions(-) > > create mode 100644 > platform/linux-generic/include/odp_buffer_inlines.h > > > > diff --git a/example/generator/odp_generator.c > b/example/generator/odp_generator.c > > index 73b0369..476cbef 100644 > > --- a/example/generator/odp_generator.c > > +++ b/example/generator/odp_generator.c > > @@ -522,11 +522,11 @@ int main(int argc, char *argv[]) > > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > > odp_buffer_pool_t pool; > > int num_workers; > > - void *pool_base; > > int i; > > int first_core; > > int core_count; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* Init ODP before calling anything else */ > > if (odp_init_global(NULL, NULL)) { > > @@ -589,20 +589,13 @@ int main(int argc, char *argv[]) > > printf("First core: %i\n\n", first_core); > > > > /* Create packet pool */ > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, > ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - if (pool_base == NULL) { > > - 
EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > ¶ms); > > > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c > > index 76d27c5..f96338c 100644 > > --- a/example/ipsec/odp_ipsec.c > > +++ b/example/ipsec/odp_ipsec.c > > @@ -367,8 +367,7 @@ static > > void ipsec_init_pre(void) > > { > > odp_queue_param_t qparam; > > - void *pool_base; > > - odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* > > * Create queues > > @@ -401,16 +400,12 @@ void ipsec_init_pre(void) > > } > > > > /* Create output buffer pool */ > > - shm = odp_shm_reserve("shm_out_pool", > > - SHM_OUT_POOL_SIZE, > ODP_CACHE_LINE_SIZE, 0); > > - > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_OUT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - out_pool = odp_buffer_pool_create("out_pool", pool_base, > > - SHM_OUT_POOL_SIZE, > > - SHM_OUT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > + out_pool = odp_buffer_pool_create("out_pool", > ODP_SHM_NULL, ¶ms); > > > > if (ODP_BUFFER_POOL_INVALID == out_pool) { > > EXAMPLE_ERR("Error: message pool create failed.\n"); > > @@ -1176,12 +1171,12 @@ main(int argc, char *argv[]) > > { > > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > > int num_workers; > > - void *pool_base; > > int i; > > int first_core; > > int core_count; > > int stream_count; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* Init ODP before calling anything else */ > > if (odp_init_global(NULL, NULL)) { > > @@ -1241,42 +1236,28 @@ main(int argc, char *argv[]) > > printf("First core: %i\n\n", first_core); > > > > /* Create packet buffer pool */ > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, > ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - pool_base = odp_shm_addr(shm); > > - > > - if (NULL == pool_base) { > > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + pkt_pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > > + ¶ms); > > > > - pkt_pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (ODP_BUFFER_POOL_INVALID == pkt_pool) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > } > > > > /* Create context buffer pool */ > > - shm = odp_shm_reserve("shm_ctx_pool", > > - SHM_CTX_POOL_SIZE, > ODP_CACHE_LINE_SIZE, 0); > > - > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_CTX_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_CTX_POOL_BUF_COUNT; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - if (NULL == pool_base) { > > - EXAMPLE_ERR("Error: context pool mem alloc > failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + ctx_pool = odp_buffer_pool_create("ctx_pool", ODP_SHM_NULL, > > + ¶ms); > > > > - 
ctx_pool = odp_buffer_pool_create("ctx_pool", pool_base, > > - SHM_CTX_POOL_SIZE, > > - SHM_CTX_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > if (ODP_BUFFER_POOL_INVALID == ctx_pool) { > > EXAMPLE_ERR("Error: context pool create failed.\n"); > > exit(EXIT_FAILURE); > > diff --git a/example/l2fwd/odp_l2fwd.c b/example/l2fwd/odp_l2fwd.c > > index ebac8c5..3c1fd6a 100644 > > --- a/example/l2fwd/odp_l2fwd.c > > +++ b/example/l2fwd/odp_l2fwd.c > > @@ -314,12 +314,12 @@ int main(int argc, char *argv[]) > > { > > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > > odp_buffer_pool_t pool; > > - void *pool_base; > > int i; > > int first_core; > > int core_count; > > odp_pktio_t pktio; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* Init ODP before calling anything else */ > > if (odp_init_global(NULL, NULL)) { > > @@ -383,20 +383,13 @@ int main(int argc, char *argv[]) > > printf("First core: %i\n\n", first_core); > > > > /* Create packet pool */ > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, > ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - if (pool_base == NULL) { > > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + pool = odp_buffer_pool_create("packet pool", ODP_SHM_NULL, > ¶ms); > > > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > diff --git a/example/odp_example/odp_example.c > b/example/odp_example/odp_example.c > > index 96a2912..8373f12 100644 > > --- a/example/odp_example/odp_example.c > > +++ b/example/odp_example/odp_example.c > > @@ -954,13 +954,13 @@ int main(int argc, char *argv[]) > > test_args_t args; > > int num_workers; > > odp_buffer_pool_t pool; > > - void *pool_base; > > odp_queue_t queue; > > int i, j; > > int prios; > > int first_core; > > odp_shm_t shm; > > test_globals_t *globals; > > + odp_buffer_pool_param_t params; > > > > printf("\nODP example starts\n\n"); > > > > @@ -1042,19 +1042,13 @@ int main(int argc, char *argv[]) > > /* > > * Create message pool > > */ > > - shm = odp_shm_reserve("msg_pool", > > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = sizeof(test_message_t); > > + params.buf_align = 0; > > + params.num_bufs = MSG_POOL_SIZE/sizeof(test_message_t); > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - if (pool_base == NULL) { > > - EXAMPLE_ERR("Shared memory reserve failed.\n"); > > - return -1; > > - } > > - > > - pool = odp_buffer_pool_create("msg_pool", pool_base, > MSG_POOL_SIZE, > > - sizeof(test_message_t), > > - ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW); > > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, > ¶ms); > > > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Pool create failed.\n"); > > diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c > > index 1763c84..27318d4 100644 > > --- a/example/packet/odp_pktio.c > > +++ b/example/packet/odp_pktio.c > > @@ -331,11 +331,11 @@ int main(int argc, char *argv[]) > > odph_linux_pthread_t thread_tbl[MAX_WORKERS]; > > odp_buffer_pool_t 
pool; > > int num_workers; > > - void *pool_base; > > int i; > > int first_core; > > int core_count; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > /* Init ODP before calling anything else */ > > if (odp_init_global(NULL, NULL)) { > > @@ -389,20 +389,13 @@ int main(int argc, char *argv[]) > > printf("First core: %i\n\n", first_core); > > > > /* Create packet pool */ > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, > ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - if (pool_base == NULL) { > > - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); > > - exit(EXIT_FAILURE); > > - } > > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > ¶ms); > > > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Error: packet pool create failed.\n"); > > exit(EXIT_FAILURE); > > diff --git a/example/timer/odp_timer_test.c > b/example/timer/odp_timer_test.c > > index 9968bfe..0d6e31a 100644 > > --- a/example/timer/odp_timer_test.c > > +++ b/example/timer/odp_timer_test.c > > @@ -244,12 +244,12 @@ int main(int argc, char *argv[]) > > test_args_t args; > > int num_workers; > > odp_buffer_pool_t pool; > > - void *pool_base; > > odp_queue_t queue; > > int first_core; > > uint64_t cycles, ns; > > odp_queue_param_t param; > > odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > printf("\nODP timer example starts\n"); > > > > @@ -313,12 +313,13 @@ int main(int argc, char *argv[]) > > */ > > shm = odp_shm_reserve("msg_pool", > > MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > > > - pool = odp_buffer_pool_create("msg_pool", pool_base, > MSG_POOL_SIZE, > > - 0, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_TIMEOUT); > > + params.buf_size = 0; > > + params.buf_align = 0; > > + params.num_bufs = MSG_POOL_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_TIMEOUT; > > + > > + pool = odp_buffer_pool_create("msg_pool", shm, ¶ms); > > > > if (pool == ODP_BUFFER_POOL_INVALID) { > > EXAMPLE_ERR("Pool create failed.\n"); > > diff --git > a/platform/linux-generic/include/api/odp_buffer_pool.h > b/platform/linux-generic/include/api/odp_buffer_pool.h > > index 30b83e0..7022daa 100644 > > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > > @@ -36,32 +36,101 @@ extern "C" { > > #define ODP_BUFFER_POOL_INVALID 0 > > > > /** > > + * Buffer pool parameters > > + * Used to communicate buffer pool creation options. > > + */ > > +typedef struct odp_buffer_pool_param_t { > > + size_t buf_size; /**< Buffer size in bytes. The maximum > > + number of bytes application will > > + store in each buffer. */ > > + size_t buf_align; /**< Minimum buffer alignment in bytes. > > + Valid values are powers of two. Use 0 > > + for default alignment. Default will > > + always be a multiple of 8. */ > > + uint32_t num_bufs; /**< Number of buffers in the pool */ > > + int buf_type; /**< Buffer type */ > > +} odp_buffer_pool_param_t; > > + > > +/** > > * Create a buffer pool > > + * This routine is used to create a buffer pool. 
It takes three > > + * arguments: the optional name of the pool to be created, an > optional shared > > + * memory handle, and a parameter struct that describes the > pool to be > > + * created. If a name is not specified the result is an > anonymous pool that > > + * cannot be referenced by odp_buffer_pool_lookup(). > > * > > - * @param name Name of the pool (max > ODP_BUFFER_POOL_NAME_LEN - 1 chars) > > - * @param base_addr Pool base address > > - * @param size Pool size in bytes > > - * @param buf_size Buffer size in bytes > > - * @param buf_align Minimum buffer alignment > > - * @param buf_type Buffer type > > + * @param[in] name Name of the pool, max > ODP_BUFFER_POOL_NAME_LEN-1 chars. > > + * May be specified as NULL for anonymous > pools. > > * > > - * @return Buffer pool handle > > + * @param[in] shm The shared memory object in which to > create the pool. > > + * Use ODP_SHM_NULL to reserve default > memory type > > + * for the buffer type. > > + * > > + * @param[in] params Buffer pool parameters. > > + * > > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if > call failed. > > */ > > + > > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > - void *base_addr, uint64_t > size, > > - size_t buf_size, size_t > buf_align, > > - int buf_type); > > + odp_shm_t shm, > > + odp_buffer_pool_param_t *params); > > > > +/** > > + * Destroy a buffer pool previously created by > odp_buffer_pool_create() > > + * > > + * @param[in] pool Handle of the buffer pool to be destroyed > > + * > > + * @return 0 on success, -1 on failure. > > + * > > + * @note This routine destroys a previously created buffer > pool. This call > > + * does not destroy any shared memory object passed to > > + * odp_buffer_pool_create() used to store the buffer pool > contents. The caller > > + * takes responsibility for that. If no shared memory object > was passed as > > + * part of the create call, then this routine will destroy any > internal shared > > + * memory objects associated with the buffer pool. Results are > undefined if > > + * an attempt is made to destroy a buffer pool that contains > allocated or > > + * otherwise active buffers. > > + */ > > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); > > > > /** > > * Find a buffer pool by name > > * > > - * @param name Name of the pool > > + * @param[in] name Name of the pool > > * > > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if > not found. > > + * > > + * @note This routine cannot be used to look up an anonymous > pool (one created > > + * with no name). > > */ > > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > > > > +/** > > + * Buffer pool information struct > > + * Used to get information about a buffer pool. > > + */ > > +typedef struct odp_buffer_pool_info_t { > > + const char *name; /**< pool name */ > > + odp_buffer_pool_param_t params; /**< pool parameters */ > > +} odp_buffer_pool_info_t; > > + > > +/** > > + * Retrieve information about a buffer pool > > + * > > + * @param[in] pool Buffer pool handle > > + * > > + * @param[out] shm Receives odp_shm_t supplied by caller at > > + * pool creation, or ODP_SHM_NULL if the > > + * pool is managed internally. > > + * > > + * @param[out] info Receives an odp_buffer_pool_info_t object > > + * that describes the pool. > > + * > > + * @return 0 on success, -1 if info could not be retrieved.
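(A sketch of the call pattern these declarations imply, using only the handle, struct, and return values quoted above; not part of the patch:

    odp_shm_t shm;
    odp_buffer_pool_info_t info;

    if (odp_buffer_pool_info(pool, &shm, &info) != 0)
        return -1;    /* info could not be retrieved */

    /* info.name and info.params now describe the pool; shm is the
     * caller-supplied odp_shm_t, or ODP_SHM_NULL if the pool is
     * managed internally */
)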
> > + */ > > + > > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > > + odp_buffer_pool_info_t *info); > > > > /** > > * Print buffer pool info > > diff --git a/platform/linux-generic/include/api/odp_config.h > b/platform/linux-generic/include/api/odp_config.h > > index 906897c..1226d37 100644 > > --- a/platform/linux-generic/include/api/odp_config.h > > +++ b/platform/linux-generic/include/api/odp_config.h > > @@ -49,6 +49,16 @@ extern "C" { > > #define ODP_CONFIG_PKTIO_ENTRIES 64 > > > > /** > > + * Segment size to use - > > + */ > > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > > + > > +/** > > + * Maximum buffer size supported > > + */ > > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) > > + > > +/** > > * @} > > */ > > > > diff --git > a/platform/linux-generic/include/api/odp_platform_types.h > b/platform/linux-generic/include/api/odp_platform_types.h > > index 4db47d3..b9b3aea 100644 > > --- a/platform/linux-generic/include/api/odp_platform_types.h > > +++ b/platform/linux-generic/include/api/odp_platform_types.h > > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > > > > /** > > + * ODP shared memory block > > + */ > > +typedef uint32_t odp_shm_t; > > + > > +/** Invalid shared memory block */ > > +#define ODP_SHM_INVALID 0 > > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer > pool use */ > > + > > +/** > > * @} > > */ > > > > diff --git > a/platform/linux-generic/include/api/odp_shared_memory.h > b/platform/linux-generic/include/api/odp_shared_memory.h > > index 26e208b..f70db5a 100644 > > --- a/platform/linux-generic/include/api/odp_shared_memory.h > > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > > @@ -20,6 +20,7 @@ extern "C" { > > > > > > #include <odp_std_types.h> > > +#include <odp_platform_types.h> > > > > /** @defgroup odp_shared_memory ODP SHARED MEMORY > > * Operations on shared memory. > > @@ -38,15 +39,6 @@ extern "C" { > > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > > > > /** > > - * ODP shared memory block > > - */ > > -typedef uint32_t odp_shm_t; > > - > > -/** Invalid shared memory block */ > > -#define ODP_SHM_INVALID 0 > > - > > - > > -/** > > * Shared memory block info > > */ > > typedef struct odp_shm_info_t { > > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h > b/platform/linux-generic/include/odp_buffer_inlines.h > > new file mode 100644 > > index 0000000..f33b41d > > --- /dev/null > > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > > @@ -0,0 +1,157 @@ > > +/* Copyright (c) 2014, Linaro Limited > > + * All rights reserved. 
> > + * > > + * SPDX-License-Identifier: BSD-3-Clause > > + */ > > + > > +/** > > + * @file > > + * > > + * Inline functions for ODP buffer mgmt routines - > implementation internal > > + */ > > + > > +#ifndef ODP_BUFFER_INLINES_H_ > > +#define ODP_BUFFER_INLINES_H_ > > + > > +#ifdef __cplusplus > > +extern "C" { > > +#endif > > + > > +static inline odp_buffer_t > odp_buffer_encode_handle(odp_buffer_hdr_t *hdr) > > +{ > > + odp_buffer_bits_t handle; > > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > > + struct pool_entry_s *pool = get_pool_entry(pool_id); > > + > > + handle.pool_id = pool_id; > > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > > + ODP_CACHE_LINE_SIZE; > > + handle.seg = 0; > > + > > + return handle.u32; > > +} > > + > > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > > +{ > > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > > + if (hdl != hdr->handle.handle) { > > + ODP_DBG("buf %p should have handle %x but is > cached as %x\n", > > + hdr, hdl, hdr->handle.handle); > > + hdr->handle.handle = hdl; > > + } > > + return hdr->handle.handle; > > +} > > + > > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > +{ > > + odp_buffer_bits_t handle; > > + uint32_t pool_id; > > + uint32_t index; > > + struct pool_entry_s *pool; > > + > > + handle.u32 = buf; > > + pool_id = handle.pool_id; > > + index = handle.index; > > + > > +#ifdef POOL_ERROR_CHECK > > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > > + return NULL; > > + } > > +#endif > > + > > + pool = get_pool_entry(pool_id); > > + > > +#ifdef POOL_ERROR_CHECK > > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > > + return NULL; > > + } > > +#endif > > + > > + return (odp_buffer_hdr_t *)(void *) > > + (pool->pool_base_addr + (index * > ODP_CACHE_LINE_SIZE)); > > +} > > + > > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > > +{ > > + return odp_atomic_load_u32(&buf->ref_count); > > +} > > + > > +static inline uint32_t > odp_buffer_incr_refcount(odp_buffer_hdr_t *buf, > > + uint32_t val) > > +{ > > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > > +} > > + > > +static inline uint32_t > odp_buffer_decr_refcount(odp_buffer_hdr_t *buf, > > + uint32_t val) > > +{ > > + uint32_t tmp; > > + > > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > > + > > + if (tmp < val) { > > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > > + return 0; > > + } else { > > + return tmp - val; > > + } > > +} > > + > > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > > +{ > > + odp_buffer_bits_t handle; > > + odp_buffer_hdr_t *buf_hdr; > > + handle.u32 = buf; > > + > > + /* For buffer handles, segment index must be 0 */ > > + if (handle.seg != 0) > > + return NULL; > > + > > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > > + > > + /* If pool not created, handle is invalid */ > > + if (pool->s.pool_shm == ODP_SHM_INVALID) > > + return NULL; > > + > > + uint32_t buf_stride = pool->s.buf_stride / > ODP_CACHE_LINE_SIZE; > > + > > + /* A valid buffer index must be on stride, and must be in > range */ > > + if ((handle.index % buf_stride != 0) || > > + ((uint32_t)(handle.index / buf_stride) >= > pool->s.params.num_bufs)) > > + return NULL; > > + > > + buf_hdr = (odp_buffer_hdr_t *)(void *) > > + (pool->s.pool_base_addr + > > + (handle.index * ODP_CACHE_LINE_SIZE)); > > + > > + /* Handle is 
valid, so buffer is valid if it is allocated */ > > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > > + return NULL; > > + else > > + return buf_hdr; > > +} > > + > > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > + > > +static inline void *buffer_map(odp_buffer_hdr_t *buf, > > + size_t offset, > > + size_t *seglen, > > + size_t limit) > > +{ > > + int seg_index = offset / buf->segsize; > We are currently discussing the use of headroom/tailroom per segment; > if that is the case then we cannot compute seg_index directly > using the above formula. > > + int seg_offset = offset % buf->segsize; > > + size_t buf_left = limit - offset; > Maybe we need an error check for buf->total_size > offset > > + > > + *seglen = buf_left < buf->segsize ? > > + buf_left : buf->segsize - seg_offset; > > + > > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); > > +} > > + > > +#ifdef __cplusplus > > +} > > +#endif > > + > > +#endif > > diff --git > a/platform/linux-generic/include/odp_buffer_internal.h > b/platform/linux-generic/include/odp_buffer_internal.h > > index 0027bfc..29666db 100644 > > --- a/platform/linux-generic/include/odp_buffer_internal.h > > +++ b/platform/linux-generic/include/odp_buffer_internal.h > > @@ -24,99 +24,118 @@ extern "C" { > > #include <odp_buffer.h> > > #include <odp_debug.h> > > #include <odp_align.h> > > - > > -/* TODO: move these to correct files */ > > - > > -typedef uint64_t odp_phys_addr_t; > > - > > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > - > > -#define ODP_BUFS_PER_CHUNK 16 > > -#define ODP_BUFS_PER_SCATTER 4 > > - > > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > > - > > +#include <odp_config.h> > > +#include <odp_byteorder.h> > > +#include <odp_thread.h> > > + > > + > > +#define ODP_BUFFER_MAX_SEG > (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * > (ODP_BUFFER_MAX_SEG - 1)) > > + > > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % > ODP_CACHE_LINE_SIZE) == 0, > > + "ODP Segment size must be a multiple of cache > line size"); > > + > > +#define ODP_SEGBITS(x) \ > > + ((x) < 2 ? 1 : \ > > + ((x) < 4 ? 2 : \ > > + ((x) < 8 ? 3 : \ > > + ((x) < 16 ? 4 : \ > > + ((x) < 32 ? 5 : \ > > + ((x) < 64 ? 6 : \ > > + ((x) < 128 ? 7 : \ > > + ((x) < 256 ? 8 : \ > > + ((x) < 512 ? 9 : \ > > + ((x) < 1024 ? 10 : \ > > + ((x) < 2048 ? 11 : \ > > + ((x) < 4096 ?
12 : \ > > + (0/0))))))))))))) > > + > > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > > + "Number of segments must not exceed log of cache > line size"); > > > > #define ODP_BUFFER_POOL_BITS 4 > > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - > ODP_BUFFER_SEG_BITS) > > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + > ODP_BUFFER_INDEX_BITS) > > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > > > > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > + > > typedef union odp_buffer_bits_t { > > uint32_t u32; > > odp_buffer_t handle; > > > > struct { > > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > uint32_t index:ODP_BUFFER_INDEX_BITS; > > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > +#else > > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > + uint32_t index:ODP_BUFFER_INDEX_BITS; > > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > +#endif > > }; > > -} odp_buffer_bits_t; > > > > + struct { > > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > +#else > > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > +#endif > > + }; > > +} odp_buffer_bits_t; > > > > /* forward declaration */ > > struct odp_buffer_hdr_t; > > > > - > > -/* > > - * Scatter/gather list of buffers > > - */ > > -typedef struct odp_buffer_scatter_t { > > - /* buffer pointers */ > > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > > - int num_bufs; /* num buffers */ > > - int pos; /* position on the > list */ > > - size_t total_len; /* Total length */ > > -} odp_buffer_scatter_t; > > - > > - > > -/* > > - * Chunk of buffers (in single pool) > > - */ > > -typedef struct odp_buffer_chunk_t { > > - uint32_t num_bufs; /* num buffers */ > > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > > -} odp_buffer_chunk_t; > > - > > - > > /* Common buffer header */ > > typedef struct odp_buffer_hdr_t { > > struct odp_buffer_hdr_t *next; /* next buf in a list */ > > + int allocator; /* allocating thread > id */ > > odp_buffer_bits_t handle; /* handle */ > > - odp_phys_addr_t phys_addr; /* physical data > start address */ > > - void *addr; /* virtual data start > address */ > > - uint32_t index; /* buf index in the > pool */ > > + union { > > + uint32_t all; > > + struct { > > + uint32_t zeroized:1; /* Zeroize buf data > on free */ > > + uint32_t hdrdata:1; /* Data is in buffer > hdr */ > > + }; > > + } flags; > > + int type; /* buffer type */ > > size_t size; /* max data size */ > > - size_t cur_offset; /* current offset */ > > odp_atomic_u32_t ref_count; /* reference count */ > > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ > > - int type; /* type of next header */ > > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ > > - > > + union { > > + void *buf_ctx; /* user context */ > > + void *udata_addr; /* user metadata addr */ > > + }; > > + size_t udata_size; /* size of user > metadata */ > > + uint32_t segcount; /* segment count */ > > + uint32_t segsize; /* segment size */ > > + void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs */ > > } odp_buffer_hdr_t; > > > > -/* Ensure next header starts from 8 byte align */ > > 
-ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, > "ODP_BUFFER_HDR_T__SIZE_ERROR"); > > +typedef struct odp_buffer_hdr_stride { > > + uint8_t > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > > +} odp_buffer_hdr_stride; > > > > +typedef struct odp_buf_blk_t { > > + struct odp_buf_blk_t *next; > > + struct odp_buf_blk_t *prev; > > +} odp_buf_blk_t; > > > > /* Raw buffer header */ > > typedef struct { > > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > > - uint8_t buf_data[]; /* start of buffer data area */ > > } odp_raw_buffer_hdr_t; > > > > - > > -/* Chunk header */ > > -typedef struct odp_buffer_chunk_hdr_t { > > - odp_buffer_hdr_t buf_hdr; > > - odp_buffer_chunk_t chunk; > > -} odp_buffer_chunk_hdr_t; > > - > > - > > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > - > > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t > buf_src); > > - > > +/* Forward declarations */ > > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > > > > #ifdef __cplusplus > > } > > diff --git > a/platform/linux-generic/include/odp_buffer_pool_internal.h > b/platform/linux-generic/include/odp_buffer_pool_internal.h > > index e0210bd..cd58f91 100644 > > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h > > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > > @@ -25,6 +25,35 @@ extern "C" { > > #include <odp_hints.h> > > #include <odp_config.h> > > #include <odp_debug.h> > > +#include <odp_shared_memory.h> > > +#include <odp_atomic.h> > > +#include <odp_atomic_internal.h> > > +#include <string.h> > > + > > +/** > > + * Buffer initialization routine prototype > > + * > > + * @note Routines of this type MAY be passed as part of the > > + * _odp_buffer_pool_init_t structure to be called whenever a > > + * buffer is allocated to initialize the user metadata > > + * associated with that buffer. > > + */ > > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void > *buf_init_arg); > > + > > +/** > > + * Buffer pool initialization parameters > > + * > > + * @param[in] udata_size Size of the user metadata for each > buffer > > + * @param[in] buf_init Function pointer to be called to > initialize the > > + * user metadata for each buffer in > the pool. > > + * @param[in] buf_init_arg Argument to be passed to buf_init(). 
> > + * > > + */ > > +typedef struct _odp_buffer_pool_init_t { > > + size_t udata_size; /**< Size of user metadata for > each buffer */ > > + _odp_buf_init_t *buf_init; /**< Buffer initialization > routine to use */ > > + void *buf_init_arg; /**< Argument to be passed to > buf_init() */ > > +} _odp_buffer_pool_init_t; /**< Type of buffer > initialization struct */ > > > > /* Use ticketlock instead of spinlock */ > > #define POOL_USE_TICKETLOCK > > @@ -39,6 +68,17 @@ extern "C" { > > #include <odp_spinlock.h> > > #endif > > > > +#ifdef POOL_USE_TICKETLOCK > > +#include <odp_ticketlock.h> > > +#define LOCK(a) odp_ticketlock_lock(a) > > +#define UNLOCK(a) odp_ticketlock_unlock(a) > > +#define LOCK_INIT(a) odp_ticketlock_init(a) > > +#else > > +#include <odp_spinlock.h> > > +#define LOCK(a) odp_spinlock_lock(a) > > +#define UNLOCK(a) odp_spinlock_unlock(a) > > +#define LOCK_INIT(a) odp_spinlock_init(a) > > +#endif > > > > struct pool_entry_s { > > #ifdef POOL_USE_TICKETLOCK > > @@ -47,66 +87,224 @@ struct pool_entry_s { > > odp_spinlock_t lock ODP_ALIGNED_CACHE; > > #endif > > > > - odp_buffer_chunk_hdr_t *head; > > - uint64_t free_bufs; > > char name[ODP_BUFFER_POOL_NAME_LEN]; > > - > > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > > - uintptr_t buf_base; > > - size_t buf_size; > > - size_t buf_offset; > > - uint64_t num_bufs; > > - void *pool_base_addr; > > - uint64_t pool_size; > > - size_t user_size; > > - size_t user_align; > > - int buf_type; > > - size_t hdr_size; > > + odp_buffer_pool_param_t params; > > + _odp_buffer_pool_init_t init_params; > > + odp_buffer_pool_t pool_hdl; > > + odp_shm_t pool_shm; > > + union { > > + uint32_t all; > > + struct { > > + uint32_t has_name:1; > > + uint32_t user_supplied_shm:1; > > + uint32_t unsegmented:1; > > + uint32_t zeroized:1; > > + uint32_t quiesced:1; > > + uint32_t low_wm_assert:1; > > + uint32_t predefined:1; > > + }; > > + } flags; > > + uint8_t *pool_base_addr; > > + size_t pool_size; > > + uint32_t buf_stride; > > + _odp_atomic_ptr_t buf_freelist; > Minor: Consider renaming it to seg_freelist, since a pool is a > collection of segments > and a buffer is a logical term.
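(To make the init_params wiring above concrete: buffer_alloc(), quoted earlier in this mail, calls init_params.buf_init for each allocated packet buffer. A pool that wants per-buffer user metadata would therefore be set up roughly as below; my_buf_init and my_state are illustrative names, not part of the patch:

    static int my_state;             /* caller-supplied state */

    static void my_buf_init(odp_buffer_t buf, void *buf_init_arg)
    {
        /* stamp this buffer's user metadata from *buf_init_arg */
        (void)buf;
        (void)buf_init_arg;
    }

    _odp_buffer_pool_init_t init_params = {
        .udata_size   = 16,          /* bytes of user metadata per buffer */
        .buf_init     = my_buf_init,
        .buf_init_arg = &my_state,
    };
)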
> > + _odp_atomic_ptr_t blk_freelist; > > + odp_atomic_u32_t bufcount; > > + odp_atomic_u32_t blkcount; > > + odp_atomic_u64_t bufallocs; > > + odp_atomic_u64_t buffrees; > > + odp_atomic_u64_t blkallocs; > > + odp_atomic_u64_t blkfrees; > > + odp_atomic_u64_t bufempty; > > + odp_atomic_u64_t blkempty; > > + odp_atomic_u64_t high_wm_count; > > + odp_atomic_u64_t low_wm_count; > > + size_t seg_size; > > + size_t high_wm; > > + size_t low_wm; > > + size_t headroom; > > + size_t tailroom; > > }; > > > > +typedef union pool_entry_u { > > + struct pool_entry_s s; > > + > > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > pool_entry_s))]; > > +} pool_entry_t; > > > > extern void *pool_entry_ptr[]; > > > > +#if defined(ODP_CONFIG_SECURE_POOLS) && > (ODP_CONFIG_SECURE_POOLS == 1) > > +#define buffer_is_secure(buf) (buf->flags.zeroized) > > +#define pool_is_secure(pool) (pool->flags.zeroized) > > +#else > > +#define buffer_is_secure(buf) 0 > > +#define pool_is_secure(pool) 0 > > +#endif > > + > > +#define TAG_ALIGN ((size_t)16) > > > > -static inline void *get_pool_entry(uint32_t pool_id) > > +#define odp_cs(ptr, old, new) \ > > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void > *)new, \ > > + _ODP_MEMMODEL_SC, \ > > + _ODP_MEMMODEL_SC) > > + > > +/* Helper functions for pointer tagging to avoid ABA race > conditions */ > > +#define odp_tag(ptr) \ > > + (((size_t)ptr) & (TAG_ALIGN - 1)) > > + > > +#define odp_detag(ptr) \ > > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > > + > > +#define odp_retag(ptr, tag) \ > > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > > + > > + > > +static inline void *get_blk(struct pool_entry_s *pool) > > { > > - return pool_entry_ptr[pool_id]; > > + void *oldhead, *myhead, *newhead; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > _ODP_MEMMODEL_ACQ); > > + > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + if (myhead == NULL) > > + break; > > + newhead = odp_retag(((odp_buf_blk_t > *)myhead)->next, tag + 1); > > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > > + > > + if (myhead == NULL) { > > + odp_atomic_inc_u64(&pool->blkempty); > > + } else { > > + uint64_t blkcount = > > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); > > + > > + /* Check for low watermark condition */ > > + if (blkcount == pool->low_wm) { > > + LOCK(&pool->lock); > > + if (blkcount <= pool->low_wm && > > + !pool->flags.low_wm_assert) { > > + pool->flags.low_wm_assert = 1; > > + odp_atomic_inc_u64(&pool->low_wm_count); > > + } > > + UNLOCK(&pool->lock); > > + } > > + odp_atomic_inc_u64(&pool->blkallocs); > > + } > > + > > + return (void *)myhead; > > } > > > > +static inline void ret_blk(struct pool_entry_s *pool, void *block) > > +{ > > + void *oldhead, *myhead, *myblock; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > _ODP_MEMMODEL_ACQ); > > > > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + ((odp_buf_blk_t *)block)->next = myhead; > > + myblock = odp_retag(block, tag + 1); > > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > > + > > + odp_atomic_inc_u64(&pool->blkfrees); > > + uint64_t blkcount = > odp_atomic_fetch_add_u32(&pool->blkcount, 1); > > + > > + /* Check if low watermark condition should be deasserted */ > > + if (blkcount == pool->high_wm) { > > + LOCK(&pool->lock); > > + if (blkcount == pool->high_wm && > pool->flags.low_wm_assert) { > > + 
pool->flags.low_wm_assert = 0; > > + odp_atomic_inc_u64(&pool->high_wm_count); > > + } > > + UNLOCK(&pool->lock); > > + } > > +} > > + > > +static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool) > > { > > - odp_buffer_bits_t handle; > > - uint32_t pool_id; > > - uint32_t index; > > - struct pool_entry_s *pool; > > - odp_buffer_hdr_t *hdr; > > - > > - handle.u32 = buf; > > - pool_id = handle.pool_id; > > - index = handle.index; > > - > > -#ifdef POOL_ERROR_CHECK > > - if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > > - ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > > - return NULL; > > + odp_buffer_hdr_t *oldhead, *myhead, *newhead; > > + > > + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, > _ODP_MEMMODEL_ACQ); > > + > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + if (myhead == NULL) > > + break; > > + newhead = odp_retag(myhead->next, tag + 1); > > + } while (odp_cs(pool->buf_freelist, oldhead, newhead) == 0); > > + > > + if (myhead != NULL) { > > + myhead->next = myhead; > > + myhead->allocator = odp_thread_id(); > > + odp_atomic_inc_u32(&pool->bufcount); > > + odp_atomic_inc_u64(&pool->bufallocs); > > + } else { > > + odp_atomic_inc_u64(&pool->bufempty); > > } > > -#endif > > > > - pool = get_pool_entry(pool_id); > > + return (void *)myhead; > > +} > > + > > +static inline void ret_buf(struct pool_entry_s *pool, > odp_buffer_hdr_t *buf) > > +{ > > + odp_buffer_hdr_t *oldhead, *myhead, *mybuf; > > > > -#ifdef POOL_ERROR_CHECK > > - if (odp_unlikely(index > pool->num_bufs - 1)) { > > - ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > > - return NULL; > > + if (!buf->flags.hdrdata && buf->type != ODP_BUFFER_TYPE_RAW) { > > + while (buf->segcount > 0) { > > + if (buffer_is_secure(buf) || > pool_is_secure(pool)) > > + memset(buf->addr[buf->segcount - 1], > > + 0, buf->segsize); > > + ret_blk(pool, buf->addr[--buf->segcount]); > > + } > > + buf->size = 0; > > } > > -#endif > > > > - hdr = (odp_buffer_hdr_t *)(pool->buf_base + index * > pool->buf_size); > > + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, > _ODP_MEMMODEL_ACQ); > > > > - return hdr; > > + do { > > + size_t tag = odp_tag(oldhead); > > + myhead = odp_detag(oldhead); > > + buf->next = myhead; > > + mybuf = odp_retag(buf, tag + 1); > > + } while (odp_cs(pool->buf_freelist, oldhead, mybuf) == 0); > > + > > + odp_atomic_dec_u32(&pool->bufcount); > > + odp_atomic_inc_u64(&pool->buffrees); > > +} > > + > > +static inline odp_buffer_pool_t pool_index_to_handle(uint32_t > pool_id) > > +{ > > + return pool_id + 1; > > } > > > > +static inline uint32_t pool_handle_to_index(odp_buffer_pool_t > pool_hdl) > > +{ > > + return pool_hdl - 1; > > +} > > + > > +static inline void *get_pool_entry(uint32_t pool_id) > > +{ > > + return pool_entry_ptr[pool_id]; > > +} > > + > > +static inline pool_entry_t *odp_pool_to_entry(odp_buffer_pool_t > pool) > > +{ > > + return (pool_entry_t > *)get_pool_entry(pool_handle_to_index(pool)); > > +} > > + > > +static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf) > > +{ > > + return odp_pool_to_entry(buf->pool_hdl); > > +} > > + > > +static inline size_t > odp_buffer_pool_segment_size(odp_buffer_pool_t pool) > > +{ > > + return odp_pool_to_entry(pool)->s.seg_size; > > +} > > > > #ifdef __cplusplus > > } > > diff --git > a/platform/linux-generic/include/odp_packet_internal.h > b/platform/linux-generic/include/odp_packet_internal.h > > index 49c59b2..f34a83d 100644 > > --- a/platform/linux-generic/include/odp_packet_internal.h > > +++ 
b/platform/linux-generic/include/odp_packet_internal.h > > @@ -22,6 +22,7 @@ extern "C" { > > #include <odp_debug.h> > > #include <odp_buffer_internal.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > #include <odp_packet.h> > > #include <odp_packet_io.h> > > > > @@ -92,7 +93,8 @@ typedef union { > > }; > > } output_flags_t; > > > > -ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), > "OUTPUT_FLAGS_SIZE_ERROR"); > > +ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), > > + "OUTPUT_FLAGS_SIZE_ERROR"); > > > > /** > > * Internal Packet header > > @@ -105,25 +107,23 @@ typedef struct { > > error_flags_t error_flags; > > output_flags_t output_flags; > > > > - uint32_t frame_offset; /**< offset to start of frame, even > on error */ > > uint32_t l2_offset; /**< offset to L2 hdr, e.g. Eth */ > > uint32_t l3_offset; /**< offset to L3 hdr, e.g. IPv4, IPv6 */ > > uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, > also ICMP) */ > > > > uint32_t frame_len; > > + uint32_t headroom; > > + uint32_t tailroom; > > > > uint64_t user_ctx; /* user context */ > > > > odp_pktio_t input; > > - > > - uint32_t pad; > > - uint8_t buf_data[]; /* start of buffer data area */ > > } odp_packet_hdr_t; > > > > -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) == > ODP_OFFSETOF(odp_packet_hdr_t, buf_data), > > - "ODP_PACKET_HDR_T__SIZE_ERR"); > > -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) % sizeof(uint64_t) == 0, > > - "ODP_PACKET_HDR_T__SIZE_ERR2"); > > +typedef struct odp_packet_hdr_stride { > > + uint8_t > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t))]; > > +} odp_packet_hdr_stride; > > + > > > > /** > > * Return the packet header > > @@ -138,6 +138,38 @@ static inline odp_packet_hdr_t > *odp_packet_hdr(odp_packet_t pkt) > > */ > > void odp_packet_parse(odp_packet_t pkt, size_t len, size_t > l2_offset); > > > > +/** > > + * Initialize packet buffer > > + */ > > +static inline void packet_init(pool_entry_t *pool, > > + odp_packet_hdr_t *pkt_hdr, > > + size_t size) > > +{ > > + /* > > + * Reset parser metadata. Note that we clear via memset to > make > > + * this routine independent of any additions to packet > metadata. > > + */ > > + const size_t start_offset = > ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr); > > + uint8_t *start; > > + size_t len; > > + > > + start = (uint8_t *)pkt_hdr + start_offset; > > + len = sizeof(odp_packet_hdr_t) - start_offset; > > + memset(start, 0, len); > > + > > + /* > > + * Packet headroom is set from the pool's headroom > > + * Packet tailroom is rounded up to fill the last > > + * segment occupied by the allocated length.
> > + */ > > + pkt_hdr->frame_len = size; > > + pkt_hdr->headroom = pool->s.headroom; > > + pkt_hdr->tailroom = > > + (pool->s.seg_size * pkt_hdr->buf_hdr.segcount) - > > + (pool->s.headroom + size); > > +} > > + > > + > > #ifdef __cplusplus > > } > > #endif > > diff --git a/platform/linux-generic/include/odp_timer_internal.h > b/platform/linux-generic/include/odp_timer_internal.h > > index ad28f53..2ff36ce 100644 > > --- a/platform/linux-generic/include/odp_timer_internal.h > > +++ b/platform/linux-generic/include/odp_timer_internal.h > > @@ -51,14 +51,9 @@ typedef struct odp_timeout_hdr_t { > > uint8_t buf_data[]; > > } odp_timeout_hdr_t; > > > > - > > - > > -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) == > > - ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), > > - "ODP_TIMEOUT_HDR_T__SIZE_ERR"); > > - > > -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) > == 0, > > - "ODP_TIMEOUT_HDR_T__SIZE_ERR2"); > > +typedef struct odp_timeout_hdr_stride { > > + uint8_t > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))]; > > +} odp_timeout_hdr_stride; > > > > > > /** > > diff --git a/platform/linux-generic/odp_buffer.c > b/platform/linux-generic/odp_buffer.c > > index bcbb99a..366190c 100644 > > --- a/platform/linux-generic/odp_buffer.c > > +++ b/platform/linux-generic/odp_buffer.c > > @@ -5,8 +5,9 @@ > > */ > > > > #include <odp_buffer.h> > > -#include <odp_buffer_internal.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_internal.h> > > +#include <odp_buffer_inlines.h> > > > > #include <string.h> > > #include <stdio.h> > > @@ -16,7 +17,7 @@ void *odp_buffer_addr(odp_buffer_t buf) > > { > > odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); > > > > - return hdr->addr; > > + return hdr->addr[0]; > > } > > > > > > @@ -38,11 +39,7 @@ int odp_buffer_type(odp_buffer_t buf) > > > > int odp_buffer_is_valid(odp_buffer_t buf) > > { > > - odp_buffer_bits_t handle; > > - > > - handle.u32 = buf; > > - > > - return (handle.index != ODP_BUFFER_INVALID_INDEX); > > + return validate_buf(buf) != NULL; > > } > > > > > > @@ -63,28 +60,14 @@ int odp_buffer_snprint(char *str, size_t n, > odp_buffer_t buf) > > len += snprintf(&str[len], n-len, > > " pool %i\n", hdr->pool_hdl); > > len += snprintf(&str[len], n-len, > > - " index %"PRIu32"\n", hdr->index); > > - len += snprintf(&str[len], n-len, > > - " phy_addr %"PRIu64"\n", hdr->phys_addr); > > - len += snprintf(&str[len], n-len, > > " addr %p\n", hdr->addr); > > len += snprintf(&str[len], n-len, > > " size %zu\n", hdr->size); > > len += snprintf(&str[len], n-len, > > - " cur_offset %zu\n", hdr->cur_offset); > > - len += snprintf(&str[len], n-len, > > " ref_count %i\n", > > odp_atomic_load_u32(&hdr->ref_count)); > > len += snprintf(&str[len], n-len, > > " type %i\n", hdr->type); > > - len += snprintf(&str[len], n-len, > > - " Scatter list\n"); > > - len += snprintf(&str[len], n-len, > > - " num_bufs %i\n", hdr->scatter.num_bufs); > > - len += snprintf(&str[len], n-len, > > - " pos %i\n", hdr->scatter.pos); > > - len += snprintf(&str[len], n-len, > > - " total_len %zu\n", > hdr->scatter.total_len); > > > > return len; > > } > > @@ -101,9 +84,3 @@ void odp_buffer_print(odp_buffer_t buf) > > > > ODP_PRINT("\n%s\n", str); > > } > > - > > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t > buf_src) > > -{ > > - (void)buf_dst; > > - (void)buf_src; > > -} > > diff --git a/platform/linux-generic/odp_buffer_pool.c > b/platform/linux-generic/odp_buffer_pool.c > > index 6a0a6b2..f545090 100644 > > --- 
a/platform/linux-generic/odp_buffer_pool.c > > +++ b/platform/linux-generic/odp_buffer_pool.c > > @@ -6,8 +6,9 @@ > > > > #include <odp_std_types.h> > > #include <odp_buffer_pool.h> > > -#include <odp_buffer_pool_internal.h> > > #include <odp_buffer_internal.h> > > +#include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > #include <odp_packet_internal.h> > > #include <odp_timer_internal.h> > > #include <odp_shared_memory.h> > > @@ -16,57 +17,35 @@ > > #include <odp_config.h> > > #include <odp_hints.h> > > #include <odp_debug.h> > > +#include <odp_atomic_internal.h> > > > > #include <string.h> > > #include <stdlib.h> > > > > > > -#ifdef POOL_USE_TICKETLOCK > > -#include <odp_ticketlock.h> > > -#define LOCK(a) odp_ticketlock_lock(a) > > -#define UNLOCK(a) odp_ticketlock_unlock(a) > > -#define LOCK_INIT(a) odp_ticketlock_init(a) > > -#else > > -#include <odp_spinlock.h> > > -#define LOCK(a) odp_spinlock_lock(a) > > -#define UNLOCK(a) odp_spinlock_unlock(a) > > -#define LOCK_INIT(a) odp_spinlock_init(a) > > -#endif > > - > > - > > #if ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS > > #error ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS > > #endif > > > > -#define NULL_INDEX ((uint32_t)-1) > > > > -union buffer_type_any_u { > > +typedef union buffer_type_any_u { > > odp_buffer_hdr_t buf; > > odp_packet_hdr_t pkt; > > odp_timeout_hdr_t tmo; > > -}; > > - > > -ODP_STATIC_ASSERT((sizeof(union buffer_type_any_u) % 8) == 0, > > - "BUFFER_TYPE_ANY_U__SIZE_ERR"); > > +} odp_anybuf_t; > > > > /* Any buffer type header */ > > typedef struct { > > union buffer_type_any_u any_hdr; /* any buffer type */ > > - uint8_t buf_data[]; /* start of buffer > data area */ > > } odp_any_buffer_hdr_t; > > > > - > > -typedef union pool_entry_u { > > - struct pool_entry_s s; > > - > > - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > pool_entry_s))]; > > - > > -} pool_entry_t; > > +typedef struct odp_any_hdr_stride { > > + uint8_t > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_any_buffer_hdr_t))]; > > +} odp_any_hdr_stride; > > > > > > typedef struct pool_table_t { > > pool_entry_t pool[ODP_CONFIG_BUFFER_POOLS]; > > - > > } pool_table_t; > > > > > > @@ -77,38 +56,6 @@ static pool_table_t *pool_tbl; > > void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS]; > > > > > > -static __thread odp_buffer_chunk_hdr_t > *local_chunk[ODP_CONFIG_BUFFER_POOLS]; > > - > > - > > -static inline odp_buffer_pool_t pool_index_to_handle(uint32_t > pool_id) > > -{ > > - return pool_id + 1; > > -} > > - > > - > > -static inline uint32_t pool_handle_to_index(odp_buffer_pool_t > pool_hdl) > > -{ > > - return pool_hdl -1; > > -} > > - > > - > > -static inline void set_handle(odp_buffer_hdr_t *hdr, > > - pool_entry_t *pool, uint32_t index) > > -{ > > - odp_buffer_pool_t pool_hdl = pool->s.pool_hdl; > > - uint32_t pool_id = pool_handle_to_index(pool_hdl); > > - > > - if (pool_id >= ODP_CONFIG_BUFFER_POOLS) > > - ODP_ABORT("set_handle: Bad pool handle %u\n", > pool_hdl); > > - > > - if (index > ODP_BUFFER_MAX_INDEX) > > - ODP_ERR("set_handle: Bad buffer index\n"); > > - > > - hdr->handle.pool_id = pool_id; > > - hdr->handle.index = index; > > -} > > - > > - > > int odp_buffer_pool_init_global(void) > > { > > uint32_t i; > > @@ -142,269 +89,244 @@ int odp_buffer_pool_init_global(void) > > return 0; > > } > > > > +/** > > + * Buffer pool creation > > + */ > > > > -static odp_buffer_hdr_t *index_to_hdr(pool_entry_t *pool, > uint32_t index) > > -{ > > - odp_buffer_hdr_t *hdr; > > - > > - hdr = (odp_buffer_hdr_t *)(pool->s.buf_base 
+ index * > pool->s.buf_size); > > - return hdr; > > -} > > - > > - > > -static void add_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr, > uint32_t index) > > -{ > > - uint32_t i = chunk_hdr->chunk.num_bufs; > > - chunk_hdr->chunk.buf_index[i] = index; > > - chunk_hdr->chunk.num_bufs++; > > -} > > - > > - > > -static uint32_t rem_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr) > > +odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > + odp_shm_t shm, > > + odp_buffer_pool_param_t *params) > > { > > - uint32_t index; > > + odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; > > + pool_entry_t *pool; > > uint32_t i; > > > > - i = chunk_hdr->chunk.num_bufs - 1; > > - index = chunk_hdr->chunk.buf_index[i]; > > - chunk_hdr->chunk.num_bufs--; > > - return index; > > -} > > - > > - > > -static odp_buffer_chunk_hdr_t *next_chunk(pool_entry_t *pool, > > - odp_buffer_chunk_hdr_t *chunk_hdr) > > -{ > > - uint32_t index; > > - > > - index = chunk_hdr->chunk.buf_index[ODP_BUFS_PER_CHUNK-1]; > > - if (index == NULL_INDEX) > > - return NULL; > > - else > > - return (odp_buffer_chunk_hdr_t > *)index_to_hdr(pool, index); > > -} > > - > > - > > -static odp_buffer_chunk_hdr_t *rem_chunk(pool_entry_t *pool) > > -{ > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - > > - chunk_hdr = pool->s.head; > > - if (chunk_hdr == NULL) { > > - /* Pool is empty */ > > - return NULL; > > - } > > - > > - pool->s.head = next_chunk(pool, chunk_hdr); > > - pool->s.free_bufs -= ODP_BUFS_PER_CHUNK; > > + /* Default initialization parameters */ > > + static _odp_buffer_pool_init_t default_init_params = { > > + .udata_size = 0, > > + .buf_init = NULL, > > + .buf_init_arg = NULL, > > + }; > > > > - /* unlink */ > > - rem_buf_index(chunk_hdr); > > - return chunk_hdr; > > -} > > + _odp_buffer_pool_init_t *init_params = &default_init_params; > > > > + if (params == NULL) > > + return ODP_BUFFER_POOL_INVALID; > > > > -static void add_chunk(pool_entry_t *pool, > odp_buffer_chunk_hdr_t *chunk_hdr) > > -{ > > - if (pool->s.head) /* link pool head to the chunk */ > > - add_buf_index(chunk_hdr, pool->s.head->buf_hdr.index); > > - else > > - add_buf_index(chunk_hdr, NULL_INDEX); > > + /* Restriction for v1.0: All buffers are unsegmented */ > > + const int unsegmented = 1; > > > > - pool->s.head = chunk_hdr; > > - pool->s.free_bufs += ODP_BUFS_PER_CHUNK; > > -} > > + /* Restriction for v1.0: No zeroization support */ > > + const int zeroized = 0; > > > > + /* Restriction for v1.0: No udata support */ > > + uint32_t udata_stride = (init_params->udata_size > > sizeof(void *)) ?
> > + ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) : > > + 0; > > > > -static void check_align(pool_entry_t *pool, odp_buffer_hdr_t *hdr) > > -{ > > - if (!ODP_ALIGNED_CHECK_POWER_2(hdr->addr, > pool->s.user_align)) { > > - ODP_ABORT("check_align: user data align error %p, > align %zu\n", > > - hdr->addr, pool->s.user_align); > > - } > > - > > - if (!ODP_ALIGNED_CHECK_POWER_2(hdr, ODP_CACHE_LINE_SIZE)) { > > - ODP_ABORT("check_align: hdr align error %p, align > %i\n", > > - hdr, ODP_CACHE_LINE_SIZE); > > - } > > -} > > - > > + uint32_t blk_size, buf_stride; > > > > -static void fill_hdr(void *ptr, pool_entry_t *pool, uint32_t index, > > - int buf_type) > > -{ > > - odp_buffer_hdr_t *hdr = (odp_buffer_hdr_t *)ptr; > > - size_t size = pool->s.hdr_size; > > - uint8_t *buf_data; > > - > > - if (buf_type == ODP_BUFFER_TYPE_CHUNK) > > - size = sizeof(odp_buffer_chunk_hdr_t); > > + switch (params->buf_type) { > > + case ODP_BUFFER_TYPE_RAW: > > + blk_size = params->buf_size; > > > > - switch (pool->s.buf_type) { > > - odp_raw_buffer_hdr_t *raw_hdr; > > - odp_packet_hdr_t *packet_hdr; > > - odp_timeout_hdr_t *tmo_hdr; > > - odp_any_buffer_hdr_t *any_hdr; > > + /* Optimize small raw buffers */ > > + if (blk_size > ODP_MAX_INLINE_BUF) > > + blk_size = ODP_ALIGN_ROUNDUP(blk_size, > TAG_ALIGN); > > > > - case ODP_BUFFER_TYPE_RAW: > > - raw_hdr = ptr; > > - buf_data = raw_hdr->buf_data; > > + buf_stride = sizeof(odp_buffer_hdr_stride); > > break; > > + > > case ODP_BUFFER_TYPE_PACKET: > > - packet_hdr = ptr; > > - buf_data = packet_hdr->buf_data; > > + if (unsegmented) > > + blk_size = > > + ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); > > + else > > + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, > > + ODP_CONFIG_BUF_SEG_SIZE); > > + buf_stride = sizeof(odp_packet_hdr_stride); > > break; > > + > > case ODP_BUFFER_TYPE_TIMEOUT: > > - tmo_hdr = ptr; > > - buf_data = tmo_hdr->buf_data; > > + blk_size = 0; /* Timeouts have no block data, only > metadata */ > > + buf_stride = sizeof(odp_timeout_hdr_stride); > > break; > > + > > case ODP_BUFFER_TYPE_ANY: > > - any_hdr = ptr; > > - buf_data = any_hdr->buf_data; > > + if (unsegmented) > > + blk_size = > > + ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); > > + else > > + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, > > + ODP_CONFIG_BUF_SEG_SIZE); > > + buf_stride = sizeof(odp_any_hdr_stride); > > break; > > - default: > > - ODP_ABORT("Bad buffer type\n"); > > - } > > - > > - memset(hdr, 0, size); > > - > > - set_handle(hdr, pool, index); > > - > > - hdr->addr = &buf_data[pool->s.buf_offset - > pool->s.hdr_size]; > > - hdr->index = index; > > - hdr->size = pool->s.user_size; > > - hdr->pool_hdl = pool->s.pool_hdl; > > - hdr->type = buf_type; > > - > > - check_align(pool, hdr); > > -} > > - > > - > > -static void link_bufs(pool_entry_t *pool) > > -{ > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - size_t hdr_size; > > - size_t data_size; > > - size_t data_align; > > - size_t tot_size; > > - size_t offset; > > - size_t min_size; > > - uint64_t pool_size; > > - uintptr_t buf_base; > > - uint32_t index; > > - uintptr_t pool_base; > > - int buf_type; > > - > > - buf_type = pool->s.buf_type; > > - data_size = pool->s.user_size; > > - data_align = pool->s.user_align; > > - pool_size = pool->s.pool_size; > > - pool_base = (uintptr_t) pool->s.pool_base_addr; > > - > > - if (buf_type == ODP_BUFFER_TYPE_RAW) { > > - hdr_size = sizeof(odp_raw_buffer_hdr_t); > > - } else if (buf_type == ODP_BUFFER_TYPE_PACKET) { > > - hdr_size = sizeof(odp_packet_hdr_t); > > - } 
else if (buf_type == ODP_BUFFER_TYPE_TIMEOUT) { > > - hdr_size = sizeof(odp_timeout_hdr_t); > > - } else if (buf_type == ODP_BUFFER_TYPE_ANY) { > > - hdr_size = sizeof(odp_any_buffer_hdr_t); > > - } else > > - ODP_ABORT("odp_buffer_pool_create: Bad type %i\n", > buf_type); > > - > > - > > - /* Chunk must fit into buffer data area.*/ > > - min_size = sizeof(odp_buffer_chunk_hdr_t) - hdr_size; > > - if (data_size < min_size) > > - data_size = min_size; > > - > > - /* Roundup data size to full cachelines */ > > - data_size = ODP_CACHE_LINE_SIZE_ROUNDUP(data_size); > > - > > - /* Min cacheline alignment for buffer header and data */ > > - data_align = ODP_CACHE_LINE_SIZE_ROUNDUP(data_align); > > - offset = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size); > > - > > - /* Multiples of cacheline size */ > > - if (data_size > data_align) > > - tot_size = data_size + offset; > > - else > > - tot_size = data_align + offset; > > - > > - /* First buffer */ > > - buf_base = ODP_ALIGN_ROUNDUP(pool_base + offset, > data_align) - offset; > > - > > - pool->s.hdr_size = hdr_size; > > - pool->s.buf_base = buf_base; > > - pool->s.buf_size = tot_size; > > - pool->s.buf_offset = offset; > > - index = 0; > > - > > - chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, > index); > > - pool->s.head = NULL; > > - pool_size -= buf_base - pool_base; > > - > > - while (pool_size > ODP_BUFS_PER_CHUNK * tot_size) { > > - int i; > > - > > - fill_hdr(chunk_hdr, pool, index, > ODP_BUFFER_TYPE_CHUNK); > > - > > - index++; > > - > > - for (i = 0; i < ODP_BUFS_PER_CHUNK - 1; i++) { > > - odp_buffer_hdr_t *hdr = index_to_hdr(pool, > index); > > - > > - fill_hdr(hdr, pool, index, buf_type); > > - > > - add_buf_index(chunk_hdr, index); > > - index++; > > - } > > - > > - add_chunk(pool, chunk_hdr); > > > > - chunk_hdr = (odp_buffer_chunk_hdr_t > *)index_to_hdr(pool, > > - index); > > - pool->s.num_bufs += ODP_BUFS_PER_CHUNK; > > - pool_size -= ODP_BUFS_PER_CHUNK * tot_size; > > + default: > > + return ODP_BUFFER_POOL_INVALID; > > } > > -} > > - > > - > > -odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > - void *base_addr, uint64_t > size, > > - size_t buf_size, size_t > buf_align, > > - int buf_type) > > -{ > > - odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; > > - pool_entry_t *pool; > > - uint32_t i; > > > > + /* Find an unused buffer pool slot and initialize it as > requested */ > > for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) { > > pool = get_pool_entry(i); > > > > LOCK(&pool->s.lock); > > + if (pool->s.pool_shm != ODP_SHM_INVALID) { > > + UNLOCK(&pool->s.lock); > > + continue; > > + } > > + > > + /* found free pool */ > > + size_t block_size, mdata_size, udata_size; > > > > - if (pool->s.buf_base == 0) { > > - /* found free pool */ > > + pool->s.flags.all = 0; > > > > + if (name == NULL) { > > + pool->s.name[0] = 0; > > + } else { > > strncpy(pool->s.name, name, > > ODP_BUFFER_POOL_NAME_LEN - 1); > > pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0; > > - pool->s.pool_base_addr = base_addr; > > - pool->s.pool_size = size; > > - pool->s.user_size = buf_size; > > - pool->s.user_align = buf_align; > > - pool->s.buf_type = buf_type; > > - > > - link_bufs(pool); > > - > > - UNLOCK(&pool->s.lock); > > + pool->s.flags.has_name = 1; > > + } > > > > - pool_hdl = pool->s.pool_hdl; > > - break; > > + pool->s.params = *params; > > + pool->s.init_params = *init_params; > > + > > + mdata_size = params->num_bufs * buf_stride; > > + udata_size = params->num_bufs * udata_stride;
> > + > > + /* Optimize for short buffers: Data stored in > buffer hdr */ > > + if (blk_size <= ODP_MAX_INLINE_BUF) > > + block_size = 0; > > + else > > + block_size = params->num_bufs * blk_size; > > + > > + pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(mdata_size + > > + udata_size + > > + block_size); > > + > > + if (shm == ODP_SHM_NULL) { > > + shm = odp_shm_reserve(pool->s.name, > > + pool->s.pool_size, > > + ODP_PAGE_SIZE, 0); > > + if (shm == ODP_SHM_INVALID) { > > + UNLOCK(&pool->s.lock); > > + return ODP_BUFFER_POOL_INVALID; > > + } > > + pool->s.pool_base_addr = odp_shm_addr(shm); > > + } else { > > + odp_shm_info_t info; > > + if (odp_shm_info(shm, &info) != 0 || > > + info.size < pool->s.pool_size) { > > + UNLOCK(&pool->s.lock); > > + return ODP_BUFFER_POOL_INVALID; > > + } > > + pool->s.pool_base_addr = odp_shm_addr(shm); > > + void *page_addr = > > + ODP_ALIGN_ROUNDUP_PTR(pool->s.pool_base_addr, > > + ODP_PAGE_SIZE); > > + if (pool->s.pool_base_addr != page_addr) { > > + if (info.size < pool->s.pool_size + > > + ((size_t)page_addr - > > + (size_t)pool->s.pool_base_addr)) { > > + UNLOCK(&pool->s.lock); > > + return > ODP_BUFFER_POOL_INVALID; > > + } > > + pool->s.pool_base_addr = page_addr; > > + } > > + pool->s.flags.user_supplied_shm = 1; > > } > > > > + pool->s.pool_shm = shm; > > + > > + /* Now safe to unlock since pool entry has been > allocated */ > > UNLOCK(&pool->s.lock); > > + > > + pool->s.flags.unsegmented = unsegmented; > > + pool->s.flags.zeroized = zeroized; > > + pool->s.seg_size = unsegmented ? > > + blk_size : ODP_CONFIG_BUF_SEG_SIZE; > > + > > + uint8_t *udata_base_addr = pool->s.pool_base_addr > + mdata_size; > > + uint8_t *block_base_addr = udata_base_addr + > udata_size; > > + > > + /* bufcount will decrement down to 0 as we > populate freelist */ > > + odp_atomic_store_u32(&pool->s.bufcount, params->num_bufs); > > + pool->s.buf_stride = buf_stride; > > + pool->s.high_wm = 0; > > + pool->s.low_wm = 0; > > + pool->s.headroom = 0; > > + pool->s.tailroom = 0; > > + _odp_atomic_ptr_store(&pool->s.buf_freelist, NULL, > > + _ODP_MEMMODEL_RLX); > > + _odp_atomic_ptr_store(&pool->s.blk_freelist, NULL, > > + _ODP_MEMMODEL_RLX); > > + > > + uint8_t *buf = udata_base_addr - buf_stride; > > + uint8_t *udat = udata_stride == 0 ?
NULL : > > + block_base_addr - udata_stride; > > + > > + /* Init buffer common header and add to pool > buffer freelist */ > > + do { > > + odp_buffer_hdr_t *tmp = > > + (odp_buffer_hdr_t *)(void *)buf; > > + > > + /* Initialize buffer metadata */ > > + tmp->allocator = ODP_CONFIG_MAX_THREADS; > > + tmp->flags.all = 0; > > + tmp->flags.zeroized = zeroized; > > + tmp->size = 0; > > + odp_atomic_store_u32(&tmp->ref_count, 0); > > + tmp->type = params->buf_type; > > + tmp->pool_hdl = pool->s.pool_hdl; > > + tmp->udata_addr = (void *)udat; > > + tmp->udata_size = init_params->udata_size; > > + tmp->segcount = 0; > > + tmp->segsize = pool->s.seg_size; > > + tmp->handle.handle = > odp_buffer_encode_handle(tmp); > > + > > + /* Set 1st seg addr for zero-len buffers */ > > + tmp->addr[0] = NULL; > > + > > + /* Special case for short buffer data */ > > + if (blk_size <= ODP_MAX_INLINE_BUF) { > > + tmp->flags.hdrdata = 1; > > + if (blk_size > 0) { > > + tmp->segcount = 1; > > + tmp->addr[0] = &tmp->addr[1]; > > + tmp->size = blk_size; > > + } > > + } > > + > > + /* Push buffer onto pool's freelist */ > > + ret_buf(&pool->s, tmp); > > + buf -= buf_stride; > > + udat -= udata_stride; > > + } while (buf >= pool->s.pool_base_addr); > > + > > + /* Form block freelist for pool */ > > + uint8_t *blk = pool->s.pool_base_addr + > pool->s.pool_size - > > + pool->s.seg_size; > > + > > + if (blk_size > ODP_MAX_INLINE_BUF) > > + do { > > + ret_blk(&pool->s, blk); > > + blk -= pool->s.seg_size; > > + } while (blk >= block_base_addr); > > + > > + /* Initialize pool statistics counters */ > > + odp_atomic_store_u64(&pool->s.bufallocs, 0); > > + odp_atomic_store_u64(&pool->s.buffrees, 0); > > + odp_atomic_store_u64(&pool->s.blkallocs, 0); > > + odp_atomic_store_u64(&pool->s.blkfrees, 0); > > + odp_atomic_store_u64(&pool->s.bufempty, 0); > > + odp_atomic_store_u64(&pool->s.blkempty, 0); > > + odp_atomic_store_u64(&pool->s.high_wm_count, 0); > > + odp_atomic_store_u64(&pool->s.low_wm_count, 0); > > + > > + pool_hdl = pool->s.pool_hdl; > > + break; > > } > > > > return pool_hdl; > > @@ -431,145 +353,126 @@ odp_buffer_pool_t > odp_buffer_pool_lookup(const char *name) > > return ODP_BUFFER_POOL_INVALID; > > } > > > > - > > -odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) > > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size) > > { > > - pool_entry_t *pool; > > - odp_buffer_chunk_hdr_t *chunk; > > - odp_buffer_bits_t handle; > > - uint32_t pool_id = pool_handle_to_index(pool_hdl); > > - > > - pool = get_pool_entry(pool_id); > > - chunk = local_chunk[pool_id]; > > - > > - if (chunk == NULL) { > > - LOCK(&pool->s.lock); > > - chunk = rem_chunk(pool); > > - UNLOCK(&pool->s.lock); > > - > > - if (chunk == NULL) > > - return ODP_BUFFER_INVALID; > > - > > - local_chunk[pool_id] = chunk; > > + pool_entry_t *pool = odp_pool_to_entry(pool_hdl); > > + size_t totsize = pool->s.headroom + size + pool->s.tailroom; > > + odp_anybuf_t *buf; > > + uint8_t *blk; > > + > > + if ((pool->s.flags.unsegmented && totsize > > pool->s.seg_size) || > > + (!pool->s.flags.unsegmented && totsize > > ODP_CONFIG_BUF_MAX_SIZE)) > > + return ODP_BUFFER_INVALID; > > + > > + buf = (odp_anybuf_t *)(void *)get_buf(&pool->s); > > + > > + if (buf == NULL) > > + return ODP_BUFFER_INVALID; > > + > > + /* Get blocks for this buffer, if pool uses application > data */ > > + if (buf->buf.size < totsize) { > > + size_t needed = totsize - buf->buf.size; > > + do { > > + blk = get_blk(&pool->s); > > + if (blk == NULL) { > > +
ret_buf(&pool->s, &buf->buf); > > + return ODP_BUFFER_INVALID; > > + } > > + buf->buf.addr[buf->buf.segcount++] = blk; > > + needed -= pool->s.seg_size; > > + } while ((ssize_t)needed > 0); > > + buf->buf.size = buf->buf.segcount * pool->s.seg_size; > > } > > > > - if (chunk->chunk.num_bufs == 0) { > > - /* give the chunk buffer */ > > - local_chunk[pool_id] = NULL; > > - chunk->buf_hdr.type = pool->s.buf_type; > > + /* By default, buffers inherit their pool's zeroization > setting */ > > + buf->buf.flags.zeroized = pool->s.flags.zeroized; > > > > - handle = chunk->buf_hdr.handle; > > - } else { > > - odp_buffer_hdr_t *hdr; > > - uint32_t index; > > - index = rem_buf_index(chunk); > > - hdr = index_to_hdr(pool, index); > > + if (buf->buf.type == ODP_BUFFER_TYPE_PACKET) { > > + packet_init(pool, &buf->pkt, size); > > > > - handle = hdr->handle; > > + if (pool->s.init_params.buf_init != NULL) > > + (*pool->s.init_params.buf_init) > > + (buf->buf.handle.handle, > > + pool->s.init_params.buf_init_arg); > > } > > > > - return handle.u32; > > + return odp_hdr_to_buf(&buf->buf); > > } > > > > - > > -void odp_buffer_free(odp_buffer_t buf) > > +odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) > > { > > - odp_buffer_hdr_t *hdr; > > - uint32_t pool_id; > > - pool_entry_t *pool; > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - > > - hdr = odp_buf_to_hdr(buf); > > - pool_id = pool_handle_to_index(hdr->pool_hdl); > > - pool = get_pool_entry(pool_id); > > - chunk_hdr = local_chunk[pool_id]; > > - > > - if (chunk_hdr && chunk_hdr->chunk.num_bufs == > ODP_BUFS_PER_CHUNK - 1) { > > - /* Current chunk is full. Push back to the pool */ > > - LOCK(&pool->s.lock); > > - add_chunk(pool, chunk_hdr); > > - UNLOCK(&pool->s.lock); > > - chunk_hdr = NULL; > > - } > > - > > - if (chunk_hdr == NULL) { > > - /* Use this buffer */ > > - chunk_hdr = (odp_buffer_chunk_hdr_t *)hdr; > > - local_chunk[pool_id] = chunk_hdr; > > - chunk_hdr->chunk.num_bufs = 0; > > - } else { > > - /* Add to current chunk */ > > - add_buf_index(chunk_hdr, hdr->index); > > - } > > + return buffer_alloc(pool_hdl, > > + odp_pool_to_entry(pool_hdl)->s.params.buf_size); > > } > > > > - > > -odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) > > +void odp_buffer_free(odp_buffer_t buf) > > { > > - odp_buffer_hdr_t *hdr; > > - > > - hdr = odp_buf_to_hdr(buf); > > - return hdr->pool_hdl; > > + odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf); > > + pool_entry_t *pool = odp_buf_to_pool(buf_hdr); > > + ret_buf(&pool->s, buf_hdr); > > } > > > > - > > void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl) > > { > > pool_entry_t *pool; > > - odp_buffer_chunk_hdr_t *chunk_hdr; > > - uint32_t i; > > uint32_t pool_id; > > > > pool_id = pool_handle_to_index(pool_hdl); > > pool = get_pool_entry(pool_id); > > > > - ODP_PRINT("Pool info\n"); > > - ODP_PRINT("---------\n"); > > - ODP_PRINT(" pool %i\n", pool->s.pool_hdl); > > - ODP_PRINT(" name %s\n", pool->s.name); > > - ODP_PRINT(" pool base %p\n", pool->s.pool_base_addr); > > - ODP_PRINT(" buf base 0x%"PRIxPTR"\n", pool->s.buf_base); > > - ODP_PRINT(" pool size 0x%"PRIx64"\n", pool->s.pool_size); > > - ODP_PRINT(" buf size %zu\n", pool->s.user_size); > > - ODP_PRINT(" buf align %zu\n", pool->s.user_align); > > - ODP_PRINT(" hdr size %zu\n", pool->s.hdr_size); > > - ODP_PRINT(" alloc size %zu\n", pool->s.buf_size); > > - ODP_PRINT(" offset to hdr %zu\n", pool->s.buf_offset); > > - ODP_PRINT(" num bufs %"PRIu64"\n", pool->s.num_bufs); > > - ODP_PRINT(" free bufs %"PRIu64"\n",
pool->s.free_bufs); > > - > > - /* first chunk */ > > - chunk_hdr = pool->s.head; > > - > > - if (chunk_hdr == NULL) { > > - ODP_ERR(" POOL EMPTY\n"); > > - return; > > - } > > - > > - ODP_PRINT("\n First chunk\n"); > > - > > - for (i = 0; i < chunk_hdr->chunk.num_bufs - 1; i++) { > > - uint32_t index; > > - odp_buffer_hdr_t *hdr; > > - > > - index = chunk_hdr->chunk.buf_index[i]; > > - hdr = index_to_hdr(pool, index); > > - > > - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, > hdr->addr, > > - index); > > - } > > - > > - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, > chunk_hdr->buf_hdr.addr, > > - chunk_hdr->buf_hdr.index); > > - > > - /* next chunk */ > > - chunk_hdr = next_chunk(pool, chunk_hdr); > > + uint32_t bufcount = odp_atomic_load_u32(&pool->s.bufcount); > > + uint32_t blkcount = odp_atomic_load_u32(&pool->s.blkcount); > > + uint64_t bufallocs = odp_atomic_load_u64(&pool->s.bufallocs); > > + uint64_t buffrees = odp_atomic_load_u64(&pool->s.buffrees); > > + uint64_t blkallocs = odp_atomic_load_u64(&pool->s.blkallocs); > > + uint64_t blkfrees = odp_atomic_load_u64(&pool->s.blkfrees); > > + uint64_t bufempty = odp_atomic_load_u64(&pool->s.bufempty); > > + uint64_t blkempty = odp_atomic_load_u64(&pool->s.blkempty); > > + uint64_t hiwmct = > odp_atomic_load_u64(&pool->s.high_wm_count); > > + uint64_t lowmct = > odp_atomic_load_u64(&pool->s.low_wm_count); > > + > > + ODP_DBG("Pool info\n"); > > + ODP_DBG("---------\n"); > > + ODP_DBG(" pool %i\n", pool->s.pool_hdl); > > + ODP_DBG(" name %s\n", > > + pool->s.flags.has_name ? pool->s.name : "Unnamed Pool"); > > + ODP_DBG(" pool type %s\n", > > + pool->s.params.buf_type == ODP_BUFFER_TYPE_RAW ? > "raw" : > > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_PACKET > ? "packet" : > > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_TIMEOUT > ? "timeout" : > > + (pool->s.params.buf_type == ODP_BUFFER_TYPE_ANY ? > "any" : > > + "unknown")))); > > + ODP_DBG(" pool storage %sODP managed\n", > > + pool->s.flags.user_supplied_shm ? > > + "application provided, " : ""); > > + ODP_DBG(" pool status %s\n", > > + pool->s.flags.quiesced ? "quiesced" : "active"); > > + ODP_DBG(" pool opts %s, %s, %s\n", > > + pool->s.flags.unsegmented ? "unsegmented" : > "segmented", > > + pool->s.flags.zeroized ? "zeroized" : "non-zeroized", > > + pool->s.flags.predefined ? "predefined" : "created"); > > + ODP_DBG(" pool base %p\n", pool->s.pool_base_addr); > > + ODP_DBG(" pool size %zu (%zu pages)\n", > > + pool->s.pool_size, pool->s.pool_size / ODP_PAGE_SIZE); > > + ODP_DBG(" udata size %zu\n", > pool->s.init_params.udata_size); > > + ODP_DBG(" buf size %zu\n", pool->s.params.buf_size); > > + ODP_DBG(" num bufs %u\n", pool->s.params.num_bufs); > > + ODP_DBG(" bufs in use %u\n", bufcount); > > + ODP_DBG(" buf allocs %lu\n", bufallocs); > > + ODP_DBG(" buf frees %lu\n", buffrees); > > + ODP_DBG(" buf empty %lu\n", bufempty); > > + ODP_DBG(" blk size %zu\n", > > + pool->s.seg_size > ODP_MAX_INLINE_BUF ?
> pool->s.seg_size : 0); > > + ODP_DBG(" blks available %u\n", blkcount); > > + ODP_DBG(" blk allocs %lu\n", blkallocs); > > + ODP_DBG(" blk frees %lu\n", blkfrees); > > + ODP_DBG(" blk empty %lu\n", blkempty); > > + ODP_DBG(" high wm count %lu\n", hiwmct); > > + ODP_DBG(" low wm count %lu\n", lowmct); > > +} > > > > - if (chunk_hdr) { > > - ODP_PRINT(" Next chunk\n"); > > - ODP_PRINT(" addr %p, id %"PRIu32"\n", > chunk_hdr->buf_hdr.addr, > > - chunk_hdr->buf_hdr.index); > > - } > > > > - ODP_PRINT("\n"); > > +odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) > > +{ > > + return odp_buf_to_hdr(buf)->pool_hdl; > > } > > diff --git a/platform/linux-generic/odp_packet.c > b/platform/linux-generic/odp_packet.c > > index f8fd8ef..8deae3d 100644 > > --- a/platform/linux-generic/odp_packet.c > > +++ b/platform/linux-generic/odp_packet.c > > @@ -23,17 +23,9 @@ static inline uint8_t > parse_ipv6(odp_packet_hdr_t *pkt_hdr, > > void odp_packet_init(odp_packet_t pkt) > > { > > odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); > > - const size_t start_offset = > ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr); > > - uint8_t *start; > > - size_t len; > > - > > - start = (uint8_t *)pkt_hdr + start_offset; > > - len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset; > > - memset(start, 0, len); > > + pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr); > > > > - pkt_hdr->l2_offset = ODP_PACKET_OFFSET_INVALID; > > - pkt_hdr->l3_offset = ODP_PACKET_OFFSET_INVALID; > > - pkt_hdr->l4_offset = ODP_PACKET_OFFSET_INVALID; > > + packet_init(pool, pkt_hdr, 0); > > } > > > > odp_packet_t odp_packet_from_buffer(odp_buffer_t buf) > > @@ -63,7 +55,7 @@ uint8_t *odp_packet_addr(odp_packet_t pkt) > > > > uint8_t *odp_packet_data(odp_packet_t pkt) > > { > > - return odp_packet_addr(pkt) + > odp_packet_hdr(pkt)->frame_offset; > > + return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->headroom; > > } > > > > > > @@ -130,20 +122,13 @@ void odp_packet_set_l4_offset(odp_packet_t > pkt, size_t offset) > > > > int odp_packet_is_segmented(odp_packet_t pkt) > > { > > - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt); > > - > > - if (buf_hdr->scatter.num_bufs == 0) > > - return 0; > > - else > > - return 1; > > + return odp_packet_hdr(pkt)->buf_hdr.segcount > 1; > > } > > > > > > int odp_packet_seg_count(odp_packet_t pkt) > > { > > - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt); > > - > > - return (int)buf_hdr->scatter.num_bufs + 1; > > + return odp_packet_hdr(pkt)->buf_hdr.segcount; > > } > > > > > > @@ -169,7 +154,7 @@ void odp_packet_parse(odp_packet_t pkt, > size_t len, size_t frame_offset) > > uint8_t ip_proto = 0; > > > > pkt_hdr->input_flags.eth = 1; > > - pkt_hdr->frame_offset = frame_offset; > > + pkt_hdr->l2_offset = frame_offset; > > pkt_hdr->frame_len = len; > > > > if (len > ODPH_ETH_LEN_MAX) > > @@ -329,8 +314,6 @@ void odp_packet_print(odp_packet_t pkt) > > len += snprintf(&str[len], n-len, > > " output_flags 0x%x\n", > hdr->output_flags.all); > > len += snprintf(&str[len], n-len, > > - " frame_offset %u\n", hdr->frame_offset); > > - len += snprintf(&str[len], n-len, > > " l2_offset %u\n", hdr->l2_offset); > > len += snprintf(&str[len], n-len, > > " l3_offset %u\n", hdr->l3_offset); > > @@ -357,14 +340,13 @@ int odp_packet_copy(odp_packet_t pkt_dst, > odp_packet_t pkt_src) > > if (pkt_dst == ODP_PACKET_INVALID || pkt_src == > ODP_PACKET_INVALID) > > return -1; > > > > - if (pkt_hdr_dst->buf_hdr.size < > > - pkt_hdr_src->frame_len + pkt_hdr_src->frame_offset) > > + if 
(pkt_hdr_dst->buf_hdr.size < pkt_hdr_src->frame_len) > > return -1; > > > > /* Copy packet header */ > > start_dst = (uint8_t *)pkt_hdr_dst + start_offset; > > start_src = (uint8_t *)pkt_hdr_src + start_offset; > > - len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset; > > + len = sizeof(odp_packet_hdr_t) - start_offset; > > memcpy(start_dst, start_src, len); > > > > /* Copy frame payload */ > > @@ -373,13 +355,6 @@ int odp_packet_copy(odp_packet_t pkt_dst, > odp_packet_t pkt_src) > > len = pkt_hdr_src->frame_len; > > memcpy(start_dst, start_src, len); > > > > - /* Copy useful things from the buffer header */ > > - pkt_hdr_dst->buf_hdr.cur_offset = > pkt_hdr_src->buf_hdr.cur_offset; > > - > > - /* Create a copy of the scatter list */ > > - odp_buffer_copy_scatter(odp_packet_to_buffer(pkt_dst), > > - odp_packet_to_buffer(pkt_src)); > > - > > return 0; > > } > > > > diff --git a/platform/linux-generic/odp_queue.c > b/platform/linux-generic/odp_queue.c > > index 1318bcd..b68a7c7 100644 > > --- a/platform/linux-generic/odp_queue.c > > +++ b/platform/linux-generic/odp_queue.c > > @@ -11,6 +11,7 @@ > > #include <odp_buffer.h> > > #include <odp_buffer_internal.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > #include <odp_internal.h> > > #include <odp_shared_memory.h> > > #include <odp_schedule_internal.h> > > diff --git a/platform/linux-generic/odp_schedule.c > b/platform/linux-generic/odp_schedule.c > > index cc84e11..a8f1938 100644 > > --- a/platform/linux-generic/odp_schedule.c > > +++ b/platform/linux-generic/odp_schedule.c > > @@ -83,8 +83,8 @@ int odp_schedule_init_global(void) > > { > > odp_shm_t shm; > > odp_buffer_pool_t pool; > > - void *pool_base; > > int i, j; > > + odp_buffer_pool_param_t params; > > > > ODP_DBG("Schedule init ...
"); > > > > @@ -99,20 +99,12 @@ int odp_schedule_init_global(void) > > return -1; > > } > > > > - shm = odp_shm_reserve("odp_sched_pool", > > - SCHED_POOL_SIZE, > ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = sizeof(queue_desc_t); > > + params.buf_align = ODP_CACHE_LINE_SIZE; > > + params.num_bufs = SCHED_POOL_SIZE/sizeof(queue_desc_t); > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - pool_base = odp_shm_addr(shm); > > - > > - if (pool_base == NULL) { > > - ODP_ERR("Schedule init: Shm reserve failed.\n"); > > - return -1; > > - } > > - > > - pool = odp_buffer_pool_create("odp_sched_pool", pool_base, > > - SCHED_POOL_SIZE, sizeof(queue_desc_t), > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > + pool = odp_buffer_pool_create("odp_sched_pool", > ODP_SHM_NULL, &params); > > > > if (pool == ODP_BUFFER_POOL_INVALID) { > > ODP_ERR("Schedule init: Pool create failed.\n"); > > diff --git a/platform/linux-generic/odp_timer.c > b/platform/linux-generic/odp_timer.c > > index 313c713..914cb58 100644 > > --- a/platform/linux-generic/odp_timer.c > > +++ b/platform/linux-generic/odp_timer.c > > @@ -5,9 +5,10 @@ > > */ > > > > #include <odp_timer.h> > > -#include <odp_timer_internal.h> > > #include <odp_time.h> > > #include <odp_buffer_pool_internal.h> > > +#include <odp_buffer_inlines.h> > > +#include <odp_timer_internal.h> > > #include <odp_internal.h> > > #include <odp_atomic.h> > > #include <odp_spinlock.h> > > diff --git a/test/api_test/odp_timer_ping.c > b/test/api_test/odp_timer_ping.c > > index 7704181..1566f4f 100644 > > --- a/test/api_test/odp_timer_ping.c > > +++ b/test/api_test/odp_timer_ping.c > > @@ -319,9 +319,8 @@ int main(int argc ODP_UNUSED, char *argv[] > ODP_UNUSED) > > ping_arg_t pingarg; > > odp_queue_t queue; > > odp_buffer_pool_t pool; > > - void *pool_base; > > int i; > > - odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > if (odp_test_global_init() != 0) > > return -1; > > @@ -334,14 +333,14 @@ int main(int argc ODP_UNUSED, char *argv[] > ODP_UNUSED) > > /* > > * Create message pool > > */ > > - shm = odp_shm_reserve("msg_pool", > > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > - pool_base = odp_shm_addr(shm); > > - > > - pool = odp_buffer_pool_create("msg_pool", pool_base, > MSG_POOL_SIZE, > > - BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > + > > + params.buf_size = BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = MSG_POOL_SIZE/BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > + > > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, > &params); > > + > > if (pool == ODP_BUFFER_POOL_INVALID) { > > LOG_ERR("Pool create failed.\n"); > > return -1; > > diff --git a/test/validation/odp_crypto.c > b/test/validation/odp_crypto.c > > index 9342aca..e329b05 100644 > > --- a/test/validation/odp_crypto.c > > +++ b/test/validation/odp_crypto.c > > @@ -31,8 +31,7 @@ CU_SuiteInfo suites[] = { > > > > int main(void) > > { > > - odp_shm_t shm; > > - void *pool_base; > > + odp_buffer_pool_param_t params; > > odp_buffer_pool_t pool; > > odp_queue_t out_queue; > > > > @@ -42,21 +41,13 @@ int main(void) > > } > > odp_init_local(); > > > > - shm = odp_shm_reserve("shm_packet_pool", > > - SHM_PKT_POOL_SIZE, > > - ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = SHM_PKT_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_PACKET; > > > > - pool_base = odp_shm_addr(shm); > > - if (!pool_base) { > > - fprintf(stderr, "Packet pool
allocation failed.\n"); > > - return -1; > > - } > > + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, > &params); > > - pool = odp_buffer_pool_create("packet_pool", pool_base, > > - SHM_PKT_POOL_SIZE, > > - SHM_PKT_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_PACKET); > > if (ODP_BUFFER_POOL_INVALID == pool) { > > fprintf(stderr, "Packet pool creation failed.\n"); > > return -1; > > @@ -67,20 +58,14 @@ int main(void) > > fprintf(stderr, "Crypto outq creation failed.\n"); > > return -1; > > } > > - shm = odp_shm_reserve("shm_compl_pool", > > - SHM_COMPL_POOL_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_SHM_SW_ONLY); > > - pool_base = odp_shm_addr(shm); > > - if (!pool_base) { > > - fprintf(stderr, "Completion pool allocation > failed.\n"); > > - return -1; > > - } > > - pool = odp_buffer_pool_create("compl_pool", pool_base, > > - SHM_COMPL_POOL_SIZE, > > - SHM_COMPL_POOL_BUF_SIZE, > > - ODP_CACHE_LINE_SIZE, > > - ODP_BUFFER_TYPE_RAW); > > + > > + params.buf_size = SHM_COMPL_POOL_BUF_SIZE; > > + params.buf_align = 0; > > + params.num_bufs = > SHM_COMPL_POOL_SIZE/SHM_COMPL_POOL_BUF_SIZE; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > + > > + pool = odp_buffer_pool_create("compl_pool", ODP_SHM_NULL, > &params); > > + > > if (ODP_BUFFER_POOL_INVALID == pool) { > > fprintf(stderr, "Completion pool creation failed.\n"); > > return -1; > > diff --git a/test/validation/odp_queue.c > b/test/validation/odp_queue.c > > index 09dba0e..9d0f3d7 100644 > > --- a/test/validation/odp_queue.c > > +++ b/test/validation/odp_queue.c > > @@ -16,21 +16,14 @@ static int queue_contest = 0xff; > > static int test_odp_buffer_pool_init(void) > > { > > odp_buffer_pool_t pool; > > - void *pool_base; > > - odp_shm_t shm; > > + odp_buffer_pool_param_t params; > > > > - shm = odp_shm_reserve("msg_pool", > > - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); > > + params.buf_size = 0; > > + params.buf_align = ODP_CACHE_LINE_SIZE; > > + params.num_bufs = 1024 * 10; > > + params.buf_type = ODP_BUFFER_TYPE_RAW; > > > > - pool_base = odp_shm_addr(shm); > > - > > - if (NULL == pool_base) { > > - printf("Shared memory reserve failed.\n"); > > - return -1; > > - } > > - > > - pool = odp_buffer_pool_create("msg_pool", pool_base, > MSG_POOL_SIZE, 0, > > - ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW); > > + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, > &params); > > > > if (ODP_BUFFER_POOL_INVALID == pool) { > > printf("Pool create failed.\n"); > > -- > > 1.8.3.2 > > > > > > _______________________________________________ > > lng-odp mailing list > > lng-odp@lists.linaro.org > > http://lists.linaro.org/mailman/listinfo/lng-odp > >
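The conversion pattern above is the same in every example and test: the odp_shm_reserve()/odp_shm_addr() boilerplate disappears and the pool geometry moves into the parameter struct. A minimal sketch of the new calling sequence (pool name and sizes here are invented for illustration):

    #include <odp.h>

    #define MY_BUF_SIZE 1856
    #define MY_NUM_BUFS 1024

    static odp_buffer_pool_t create_raw_pool(void)
    {
            odp_buffer_pool_param_t params;

            params.buf_size  = MY_BUF_SIZE;  /* max bytes stored per buffer */
            params.buf_align = 0;            /* 0 = default alignment */
            params.num_bufs  = MY_NUM_BUFS;
            params.buf_type  = ODP_BUFFER_TYPE_RAW;

            /* ODP_SHM_NULL: the implementation reserves its own backing store */
            return odp_buffer_pool_create("my_raw_pool", ODP_SHM_NULL, &params);
    }

A caller that still wants to supply its own storage passes an odp_shm_t from odp_shm_reserve() instead of ODP_SHM_NULL; per the patch, that object must be page aligned and at least pool_size bytes, otherwise create returns ODP_BUFFER_POOL_INVALID.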
On 12/03/2014 01:07 PM, Anders Roxell wrote: > Hi, > > This is the proposed way to break up your patch: > 1. break circular dependencies > 2. move inline functions to a new "odp_buffer_inlines.h" file. > 3. restructuring ODP buffer pool > 4. odp_buffer_pool_create > 5. odp_buffer_pool_destroy > 6. odp_buffer_pool_info I agree. It would be better to have the refactoring first and then clean API changes. Patches 1-3 may be squashed if the changes are interdependent.
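Whatever the final patch granularity, the heart of item 3 is the lock-free freelist that the new get_buf()/ret_buf() operate on: the list head carries a modification tag that is bumped on every update, so a compare-and-swap cannot be fooled by an ABA reuse of the same node. This is not the ODP code itself, just a rough stand-alone C11 illustration of the idea (all names invented; the tag is assumed to fit in the low bits of an 8-byte-aligned pointer):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stddef.h>

    struct node { struct node *next; };

    #define TAG_MASK ((uintptr_t)7) /* 3-bit tag in the low pointer bits */

    static struct node *detag(uintptr_t v) { return (struct node *)(v & ~TAG_MASK); }
    static uintptr_t retag(struct node *p, uintptr_t tag) { return (uintptr_t)p | (tag & TAG_MASK); }

    static struct node *pop(_Atomic uintptr_t *head)
    {
            uintptr_t old = atomic_load_explicit(head, memory_order_acquire);
            uintptr_t newv;
            struct node *n;

            do {
                    n = detag(old);
                    if (n == NULL)
                            return NULL;
                    /* Bumping the tag makes a pop/push/pop cycle visible to the CAS */
                    newv = retag(n->next, (old & TAG_MASK) + 1);
            } while (!atomic_compare_exchange_weak_explicit(head, &old, newv,
                            memory_order_acq_rel, memory_order_acquire));

            return n;
    }

    static void push(_Atomic uintptr_t *head, struct node *n)
    {
            uintptr_t old = atomic_load_explicit(head, memory_order_acquire);
            uintptr_t newv;

            do {
                    n->next = detag(old);
                    newv = retag(n, (old & TAG_MASK) + 1);
            } while (!atomic_compare_exchange_weak_explicit(head, &old, newv,
                            memory_order_acq_rel, memory_order_acquire));
    }

Note that reading n->next from a possibly stale head is only safe because, as in the patch, pool nodes live in a fixed reservation and are never returned to the OS, so a stale dereference cannot fault.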
On Wed, Dec 3, 2014 at 5:07 AM, Anders Roxell <anders.roxell@linaro.org> wrote: > Hi, > > This is the proposed way to break up your patch: > 1. break circular dependencies > 2. move inline functions to a new "odp_buffer_inlines.h" file. > 3. restructuring ODP buffer pool > 4. odp_buffer_pool_create > 5. odp_buffer_pool_destroy > 6. odp_buffer_pool_info > Seriously, what is the benefit of this sort of slicing and dicing? The goal here is to get the code merged rather than to figure out how to package it according to some esthetic ideal. Once it's merged nobody is going to care about any of this. Are you seriously suggesting that the code is unreviewable unless done this way? There are lots of follow-on patches that I want to do but these are being delayed trying to get off the ground. This whole change is "Phase 1". Once that's in, the subsequent patches will be smaller and more focused, but this is an iterative process. Further slicing might be doable, but certain things, like separating 3 and 4, are not possible since the current odp_buffer_pool_create() is intimately tied to the current structure, which is what is being replaced. I consider 1 and 2 as part of that restructure. The only reason why odp_buffer_pool_destroy() and odp_buffer_pool_info() can be separated is because they are new APIs, but again separating them into separate patches is basically wasted motion here since they are not optional pieces of the API, but the latest patch does that in the interest of being responsive to comments. I'd really prefer that we focus on reviewing patch contents rather than packaging, given that we're building ODP here, not making incremental changes to an established product. > see more comments inline. > > On 2 December 2014 at 22:50, Bill Fischofer <bill.fischofer@linaro.org> > wrote: > > > > > > On Tue, Dec 2, 2014 at 3:05 PM, Anders Roxell <anders.roxell@linaro.org> > > wrote: > >> > >> prefix this patch with: > >> api: ... > >> > >> On 2014-12-02 13:17, Bill Fischofer wrote: > >> > Restructure ODP buffer pool internals to support new APIs. > >> > >> The comment doesn't add any extra value from the short log. > >> "Modifies linux-generic, example and test to make them ready for adding > the > >> new odp_buffer_pool_create API" > > > > > > The comment is descriptive of what's in the patch. > > > >> > >> > >> > Implements new odp_buffer_pool_create() API. > >> > > >> > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> > >> > --- > >> > example/generator/odp_generator.c | 19 +- > >> > example/ipsec/odp_ipsec.c | 57 +- > >> > example/l2fwd/odp_l2fwd.c | 19 +- > >> > example/odp_example/odp_example.c | 18 +- > >> > example/packet/odp_pktio.c | 19 +- > >> > example/timer/odp_timer_test.c | 13 +- > >> > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- > >> > platform/linux-generic/include/api/odp_config.h | 10 + > >> > .../linux-generic/include/api/odp_platform_types.h | 9 + > >> > >> Group stuff into odp_platform_types.h should be its own patch. > >> > > > > The change to odp_platform_types.h moves typedefs from > odp_shared_memory.h > > to break > > circular dependencies that would otherwise arise. As a result, this is > not > > separable from > > the rest of this patch. > > don't agree. > > > > > > >> > >> > .../linux-generic/include/api/odp_shared_memory.h | 10 +- > >> > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ > >> > >> Creating an inline file should be its own patch. > > > > > > No, it's not independent of the rest of these changes.
This is a > > restructuring patch. The rule that > > you've promoted is that each patch can be applied independently. Trying > to > > make this its own > > patch wouldn't follow that rule. > > Good that you are trying. > You are saying "ODP buffer pool restructure" in the short log, please > do that and *only* that in this patch then! > Do not add new APIs or change existing APIs, only restructure! > > > > >> > >> > >> > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- > >> > .../include/odp_buffer_pool_internal.h | 278 ++++++-- > >> > .../linux-generic/include/odp_packet_internal.h | 50 +- > >> > .../linux-generic/include/odp_timer_internal.h | 11 +- > >> > platform/linux-generic/odp_buffer.c | 31 +- > >> > platform/linux-generic/odp_buffer_pool.c | 711 > >> > +++++++++------------ > >> > platform/linux-generic/odp_packet.c | 41 +- > >> > platform/linux-generic/odp_queue.c | 1 + > >> > platform/linux-generic/odp_schedule.c | 20 +- > >> > platform/linux-generic/odp_timer.c | 3 +- > >> > test/api_test/odp_timer_ping.c | 19 +- > >> > test/validation/odp_crypto.c | 43 +- > >> > test/validation/odp_queue.c | 19 +- > >> > 24 files changed, 1024 insertions(+), 762 deletions(-) > >> > create mode 100644 > platform/linux-generic/include/odp_buffer_inlines.h > >> > > >> > >> [...] > >> > >> > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h > >> > b/platform/linux-generic/include/api/odp_buffer_pool.h > >> > index 30b83e0..7022daa 100644 > >> > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > >> > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > >> > @@ -36,32 +36,101 @@ extern "C" { > >> > #define ODP_BUFFER_POOL_INVALID 0 > >> > > >> > /** > >> > + * Buffer pool parameters > >> > + * Used to communicate buffer pool creation options. > >> > + */ > >> > +typedef struct odp_buffer_pool_param_t { > >> > + size_t buf_size; /**< Buffer size in bytes. The maximum > >> > + number of bytes application will > >> > >> "...bytes the application..." > > > > > > The definite article is optional in English grammar here. This level of > > nit-picking isn't > > needed. > > yes, it's a nit that you can fix when you send version 5 or whatever > version you will send out. > You misunderstood. As written it's perfectly valid standard English. English has more than one way of saying things. Are you saying you were unable to understand the comment? I appreciate that you might have chosen to write it differently, but you didn't write it. > > > >> > >> > >> > + store in each buffer. */ > >> > + size_t buf_align; /**< Minimum buffer alignment in bytes. > >> > + Valid values are powers of two. Use 0 > >> > + for default alignment. Default will > >> > + always be a multiple of 8. */ > >> > + uint32_t num_bufs; /**< Number of buffers in the pool */ > >> > + int buf_type; /**< Buffer type */ > >> > +} odp_buffer_pool_param_t; > >> > + > >> > +/** > >> > * Create a buffer pool > >> > + * This routine is used to create a buffer pool. It takes three > >> > + * arguments: the optional name of the pool to be created, an > optional > >> > shared > >> > + * memory handle, and a parameter struct that describes the pool to > be > >> > + * created. If a name is not specified the result is an anonymous > pool > >> > that > >> > + * cannot be referenced by odp_buffer_pool_lookup().
> >> > * > >> > - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - > 1 > >> > chars) > >> > - * @param base_addr Pool base address > >> > - * @param size Pool size in bytes > >> > - * @param buf_size Buffer size in bytes > >> > - * @param buf_align Minimum buffer alignment > >> > - * @param buf_type Buffer type > >> > + * @param[in] name Name of the pool, max > ODP_BUFFER_POOL_NAME_LEN-1 > >> > chars. > >> > + * May be specified as NULL for anonymous pools. > >> > * > >> > - * @return Buffer pool handle > >> > + * @param[in] shm The shared memory object in which to create > the > >> > pool. > >> > + * Use ODP_SHM_NULL to reserve default memory > type > >> > + * for the buffer type. > >> > + * > >> > + * @param[in] params Buffer pool parameters. > >> > + * > >> > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call > >> > failed. > >> > >> Should be > >> @retval Buffer pool handle on success > >> @retval ODP_BUFFER_POOL_INVALID if call failed 1 (if it can fail list > the > >> reasons) > >> @retval ODP_BUFFER_POOL_INVALID if call failed 2 (if it can fail list > the > >> reasons) > >> @retval ODP_BUFFER_POOL_INVALID if call failed N > > > > > > The documentation is consistent with that used in the rest of the file. > If > > we want a doc cleanup patch > > that should be a separate patch and cover the whole file, not just one > > routine that would otherwise stand > > out as an anomaly. I'll be happy to write that after this patch gets > > merged. > > Wasn't this a "ODP buffer pool restructure" patch, I would say that > this goes under restructure and or maybe it goes under a new patch > "api: change odp_buffer_pool_create" =) > As soon as this patch is merged I'll submit a patch to change all of the docs. I'll expect it to be approved quickly. :) > > > > >> > >> > >> > */ > >> > + > >> > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > >> > - void *base_addr, uint64_t size, > >> > - size_t buf_size, size_t > >> > buf_align, > >> > - int buf_type); > >> > + odp_shm_t shm, > >> > + odp_buffer_pool_param_t > *params); > >> > > >> > +/** > >> > + * Destroy a buffer pool previously created by > odp_buffer_pool_create() > >> > + * > >> > + * @param[in] pool Handle of the buffer pool to be destroyed > >> > + * > >> > + * @return 0 on Success, -1 on Failure. > >> > >> use @retval here as well and list the reasons how it can fail.] > > > > > > Same comment as above. > > I'm going to copy you here: > "Same comment as above." =) > > > > >> > >> > >> > + * > >> > + * @note This routine destroys a previously created buffer pool. This > >> > call > >> > + * does not destroy any shared memory object passed to > >> > + * odp_buffer_pool_create() used to store the buffer pool contents. > The > >> > caller > >> > + * takes responsibility for that. If no shared memory object was > passed > >> > as > >> > + * part of the create call, then this routine will destroy any > internal > >> > shared > >> > + * memory objects associated with the buffer pool. Results are > >> > undefined if > >> > + * an attempt is made to destroy a buffer pool that contains > allocated > >> > or > >> > + * otherwise active buffers. > >> > + */ > >> > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); > >> > >> This doesn't belong in this patch, belongs in the > >> odp_buffer_pool_destroy patch. > >> > > > > That patch is for the implementation of the function, as described. > This is > > benign here. 
> > > >> > > >> > /** > >> > * Find a buffer pool by name > >> > * > >> > - * @param name Name of the pool > >> > + * @param[in] name Name of the pool > >> > * > >> > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not > found. > >> > >> Fix this. > > > > > > Same comments as above. > > > >> > >> > >> > + * > >> > + * @note This routine cannot be used to look up an anonymous pool > (one > >> > created > >> > + * with no name). > >> > >> How can I delete an anonymous pool? > > > > > > You can't. This is just implementing what's been specified. If we want > to > > change the spec > > that can be addressed in a follow-on patch. > > Ok, I didn't know; thank you for the explanation. > You're welcome. > > > > >> > >> > >> > */ > >> > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > >> > > >> > +/** > >> > + * Buffer pool information struct > >> > + * Used to get information about a buffer pool. > >> > + */ > >> > +typedef struct odp_buffer_pool_info_t { > >> > + const char *name; /**< pool name */ > >> > + odp_buffer_pool_param_t params; /**< pool parameters */ > >> > +} odp_buffer_pool_info_t; > >> > + > >> > +/** > >> > + * Retrieve information about a buffer pool > >> > + * > >> > + * @param[in] pool Buffer pool handle > >> > + * > >> > + * @param[out] shm Receives odp_shm_t supplied by caller at > >> > + * pool creation, or ODP_SHM_NULL if the > >> > + * pool is managed internally. > >> > + * > >> > + * @param[out] info Receives an odp_buffer_pool_info_t object > >> > + * that describes the pool. > >> > + * > >> > + * @return 0 on success, -1 if info could not be retrieved. > >> > >> Fix > > > > > > Same doc comments as above. > > > >> > >> > >> > + */ > >> > + > >> > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > >> > + odp_buffer_pool_info_t *info); > >> > >> This doesn't belong in this patch, belongs in the > >> odp_buffer_pool_info patch. > >> > >> Again, the separate patch implements these functions. These are benign. > > > > > >> > >> > > >> > /** > >> > * Print buffer pool info > >> > diff --git a/platform/linux-generic/include/api/odp_config.h > >> > b/platform/linux-generic/include/api/odp_config.h > >> > index 906897c..1226d37 100644 > >> > --- a/platform/linux-generic/include/api/odp_config.h > >> > +++ b/platform/linux-generic/include/api/odp_config.h > >> > @@ -49,6 +49,16 @@ extern "C" { > >> > #define ODP_CONFIG_PKTIO_ENTRIES 64 > >> > > >> > /** > >> > + * Segment size to use - > >> > >> What does "-" mean? > >> Can you elaborate more on this? > > > > > > It's a stray character. > > gah, I'm sorry for being unclear. > I meant "-" remove! > > and can you elaborate more and not only say "Segment size to use". > > > > >> > >> > >> > + */ > >> > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > >> > + > >> > +/** > >> > + * Maximum buffer size supported > >> > + */ > >> > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) > >> > >> Isn't this platform specific? > > > > > > Yes, and this is platform/linux-generic. I've chosen this for now > because > > the current linux-generic > > packet I/O doesn't support scatter/gather reads/writes. > > Bill I know this is linux-generic, I was unclear again. > > Why do you place this in odp_config.h and not in odp_platform_types.h? > odp_platform_types.h is for typedefs. These are implementation limits, like number of buffer pools we support, etc. It's the proper file for these sorts of limits since you can change variables here and get a different configuration of linux-generic.
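To put numbers on those two knobs: with ODP_CONFIG_BUF_SEG_SIZE at 1536 (512*3) and a 7-segment cap, the largest buffer a segmented pool can serve is 10752 bytes, comfortably above a 9000-byte jumbo frame, which is presumably the intent. A quick sanity check of the segment math (illustrative only, values taken from this patch):

    #include <stdio.h>

    #define ODP_CONFIG_BUF_SEG_SIZE (512*3)                       /* 1536 */
    #define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7)   /* 10752 */

    /* Segments needed for a given payload, rounding up as the allocator does */
    static size_t seg_count(size_t len)
    {
            return (len + ODP_CONFIG_BUF_SEG_SIZE - 1) / ODP_CONFIG_BUF_SEG_SIZE;
    }

    int main(void)
    {
            printf("%zu\n", seg_count(9000));                   /* 6 segs = 9216 bytes */
            printf("%zu\n", seg_count(ODP_CONFIG_BUF_MAX_SIZE)); /* 7, the cap */
            return 0;
    }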
> > > > >> > >> > >> > + > >> > +/** > >> > * @} > >> > */ > >> > > >> > diff --git a/platform/linux-generic/include/api/odp_platform_types.h > >> > b/platform/linux-generic/include/api/odp_platform_types.h > >> > index 4db47d3..b9b3aea 100644 > >> > --- a/platform/linux-generic/include/api/odp_platform_types.h > >> > +++ b/platform/linux-generic/include/api/odp_platform_types.h > >> > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > >> > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > >> > > >> > /** > >> > + * ODP shared memory block > >> > + */ > >> > +typedef uint32_t odp_shm_t; > >> > + > >> > +/** Invalid shared memory block */ > >> > +#define ODP_SHM_INVALID 0 > >> > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use > >> > */ > >> > >> ODP_SHM_* touches shm functionality and should be in its own patch to > >> fix/move it. > > > > > > Already discussed above. > >> > >> > >> > + > >> > +/** > >> > * @} > >> > */ > >> > > >> > diff --git a/platform/linux-generic/include/api/odp_shared_memory.h > >> > b/platform/linux-generic/include/api/odp_shared_memory.h > >> > index 26e208b..f70db5a 100644 > >> > --- a/platform/linux-generic/include/api/odp_shared_memory.h > >> > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > >> > @@ -20,6 +20,7 @@ extern "C" { > >> > > >> > > >> > #include <odp_std_types.h> > >> > +#include <odp_platform_types.h> > >> > >> Not relevant for the odp_buffer_pool_create > > > > > > Incorrect. It is part of the restructure for reasons discussed above. > > OK, for restructuring but not for odp_buffer_pool_create =) > > > > >> > >> > >> > > >> > /** @defgroup odp_shared_memory ODP SHARED MEMORY > >> > * Operations on shared memory. > >> > @@ -38,15 +39,6 @@ extern "C" { > >> > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > >> > > >> > /** > >> > - * ODP shared memory block > >> > - */ > >> > -typedef uint32_t odp_shm_t; > >> > - > >> > -/** Invalid shared memory block */ > >> > -#define ODP_SHM_INVALID 0 > >> > - > >> > - > >> > -/** > >> > * Shared memory block info > >> > */ > >> > typedef struct odp_shm_info_t { > >> > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h > >> > b/platform/linux-generic/include/odp_buffer_inlines.h > >> > new file mode 100644 > >> > index 0000000..f33b41d > >> > --- /dev/null > >> > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > >> > @@ -0,0 +1,157 @@ > >> > +/* Copyright (c) 2014, Linaro Limited > >> > + * All rights reserved. 
> >> > + * > >> > + * SPDX-License-Identifier: BSD-3-Clause > >> > + */ > >> > + > >> > +/** > >> > + * @file > >> > + * > >> > + * Inline functions for ODP buffer mgmt routines - implementation > >> > internal > >> > + */ > >> > + > >> > +#ifndef ODP_BUFFER_INLINES_H_ > >> > +#define ODP_BUFFER_INLINES_H_ > >> > + > >> > +#ifdef __cplusplus > >> > +extern "C" { > >> > +#endif > >> > + > >> > +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t > >> > *hdr) > >> > +{ > >> > + odp_buffer_bits_t handle; > >> > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > >> > + struct pool_entry_s *pool = get_pool_entry(pool_id); > >> > + > >> > + handle.pool_id = pool_id; > >> > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > >> > + ODP_CACHE_LINE_SIZE; > >> > + handle.seg = 0; > >> > + > >> > + return handle.u32; > >> > +} > >> > + > >> > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > >> > +{ > >> > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > >> > + if (hdl != hdr->handle.handle) { > >> > + ODP_DBG("buf %p should have handle %x but is cached as > >> > %x\n", > >> > + hdr, hdl, hdr->handle.handle); > >> > + hdr->handle.handle = hdl; > >> > + } > >> > + return hdr->handle.handle; > >> > +} > >> > + > >> > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > >> > +{ > >> > + odp_buffer_bits_t handle; > >> > + uint32_t pool_id; > >> > + uint32_t index; > >> > + struct pool_entry_s *pool; > >> > + > >> > + handle.u32 = buf; > >> > + pool_id = handle.pool_id; > >> > + index = handle.index; > >> > + > >> > +#ifdef POOL_ERROR_CHECK > >> > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > >> > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > >> > + return NULL; > >> > + } > >> > +#endif > >> > + > >> > + pool = get_pool_entry(pool_id); > >> > + > >> > +#ifdef POOL_ERROR_CHECK > >> > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > >> > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > >> > + return NULL; > >> > + } > >> > +#endif > >> > + > >> > + return (odp_buffer_hdr_t *)(void *) > >> > + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE)); > >> > +} > >> > + > >> > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > >> > +{ > >> > + return odp_atomic_load_u32(&buf->ref_count); > >> > +} > >> > + > >> > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t > *buf, > >> > + uint32_t val) > >> > +{ > >> > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > >> > +} > >> > + > >> > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t > *buf, > >> > + uint32_t val) > >> > +{ > >> > + uint32_t tmp; > >> > + > >> > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > >> > + > >> > + if (tmp < val) { > >> > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > >> > + return 0; > >> > + } else { > >> > >> drop the else statement > > > > > > That would be erroneous code. Refcounts don't go below 0. This code > > ensures that. > > Bill, I was unclear again. > I thought you understood that I meant only remove "else" and move out > return on tab! > > like this: > > if (tmp < val) { > odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > return 0; > } > return tmp - val; > De gustibus non est disputandum. Actually, having the else makes the code clearer and less prone to error introduction in future updates. I'm sure you'll agree that there is no performance difference between the two. 
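
(For reference, the two spellings being debated are behaviorally identical; the only real content in this helper is the saturating decrement. A standalone sketch follows, with C11 atomics assumed in place of the odp_atomic wrappers so it compiles on its own:)

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Saturating refcount decrement: a count of 0 stays 0 rather than
 * wrapping, mirroring odp_buffer_decr_refcount() quoted above. */
static uint32_t decr_refcount(_Atomic uint32_t *ref_count, uint32_t val)
{
	uint32_t tmp = atomic_fetch_sub(ref_count, val);

	if (tmp < val) {
		/* Underflow: return the excess and clamp at zero */
		atomic_fetch_add(ref_count, val - tmp);
		return 0;
	}
	return tmp - val;	/* with or without "else": same object code */
}

int main(void)
{
	_Atomic uint32_t rc = 1;

	printf("%u\n", decr_refcount(&rc, 1));	/* prints 0 */
	printf("%u\n", decr_refcount(&rc, 1));	/* prints 0, no wraparound */
	return 0;
}
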
> > > > >> > >> > >> > + return tmp - val; > >> > + } > >> > +} > >> > + > >> > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > >> > +{ > >> > + odp_buffer_bits_t handle; > >> > + odp_buffer_hdr_t *buf_hdr; > >> > + handle.u32 = buf; > >> > + > >> > + /* For buffer handles, segment index must be 0 */ > >> > >> Why does the buffer handle always have to have a segment index that must > >> be 0? > > > > > > Because that's how I've defined it in this implementation. > > validate_buffer() can be > > given any 32-bit value and it will robustly say whether or not it is a > valid > > buffer handle. > > hmm... OK, I will look again > > > > >> > >> > >> > + if (handle.seg != 0) > >> > + return NULL; > >> > >> Why do we need to check everything? > >> shouldn't we trust our internal stuff to be sent correctly? > >> Maybe it should be an ODP_ASSERT? > > > > > > No, odp_buffer_is_valid() does not assert. It returns a yes/no value for > > any > > input value. > > > >> > >> > >> > + > >> > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > >> > + > >> > + /* If pool not created, handle is invalid */ > >> > + if (pool->s.pool_shm == ODP_SHM_INVALID) > >> > + return NULL; > >> > >> The same applies here. > > > > > > Same answer. > > > >> > >> > >> > + > >> > + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; > >> > + > >> > + /* A valid buffer index must be on stride, and must be in range > */ > >> > + if ((handle.index % buf_stride != 0) || > >> > + ((uint32_t)(handle.index / buf_stride) >= > >> > pool->s.params.num_bufs)) > >> > + return NULL; > >> > + > >> > + buf_hdr = (odp_buffer_hdr_t *)(void *) > >> > + (pool->s.pool_base_addr + > >> > + (handle.index * ODP_CACHE_LINE_SIZE)); > >> > + > >> > + /* Handle is valid, so buffer is valid if it is allocated */ > >> > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > >> > + return NULL; > >> > + else > >> > >> Drop the else > > > > > > No, that would be erroneous. A buffer handle is no longer valid if > > the buffer has been freed. That's what's being checked here. > > again: > /* Handle is valid, so buffer is valid if it is allocated */ > if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > return NULL; > return buf_hdr; > > Same comment as above. If I didn't think having the else here was clearer I would not have written the code that way. The style passes checkpatch, which should be sufficient for reviewers. > > > > >> > >> > >> > + return buf_hdr; > >> > +} > >> > + > >> > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > >> > + > >> > +static inline void *buffer_map(odp_buffer_hdr_t *buf, > >> > + size_t offset, > >> > + size_t *seglen, > >> > + size_t limit) > >> > +{ > >> > + int seg_index = offset / buf->segsize; > >> > + int seg_offset = offset % buf->segsize; > >> > + size_t buf_left = limit - offset; > >> > + > >> > + *seglen = buf_left < buf->segsize ? 
> >> > + buf_left : buf->segsize - seg_offset; > >> > + > >> > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); > >> > +} > >> > + > >> > +#ifdef __cplusplus > >> > +} > >> > +#endif > >> > + > >> > +#endif > >> > diff --git a/platform/linux-generic/include/odp_buffer_internal.h > >> > b/platform/linux-generic/include/odp_buffer_internal.h > >> > index 0027bfc..29666db 100644 > >> > --- a/platform/linux-generic/include/odp_buffer_internal.h > >> > +++ b/platform/linux-generic/include/odp_buffer_internal.h > >> > @@ -24,99 +24,118 @@ extern "C" { > >> > #include <odp_buffer.h> > >> > #include <odp_debug.h> > >> > #include <odp_align.h> > >> > - > >> > -/* TODO: move these to correct files */ > >> > - > >> > -typedef uint64_t odp_phys_addr_t; > >> > - > >> > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > >> > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > >> > - > >> > -#define ODP_BUFS_PER_CHUNK 16 > >> > -#define ODP_BUFS_PER_SCATTER 4 > >> > - > >> > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > >> > - > >> > +#include <odp_config.h> > >> > +#include <odp_byteorder.h> > >> > +#include <odp_thread.h> > >> > + > >> > + > >> > +#define ODP_BUFFER_MAX_SEG > >> > (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > >> > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG > - > >> > 1)) > >> > + > >> > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == > 0, > >> > + "ODP Segment size must be a multiple of cache line > >> > size"); > >> > + > >> > +#define ODP_SEGBITS(x) \ > >> > + ((x) < 2 ? 1 : \ > >> > + ((x) < 4 ? 2 : \ > >> > + ((x) < 8 ? 3 : \ > >> > + ((x) < 16 ? 4 : \ > >> > + ((x) < 32 ? 5 : \ > >> > + ((x) < 64 ? 6 : \ > >> > >> Do you need to add the tab "6 :<tab>\" > > > > > > I'm not sure I understand the comment. > > fix your editor please! > I'm using emacs with style = linux. > > > > >> > >> > >> > + ((x) < 128 ? 7 : \ > >> > + ((x) < 256 ? 8 : \ > >> > + ((x) < 512 ? 9 : \ > >> > + ((x) < 1024 ? 10 : \ > >> > + ((x) < 2048 ? 11 : \ > >> > + ((x) < 4096 ? 12 : \ > >> > + (0/0))))))))))))) > >> > + > >> > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > >> > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > >> > + "Number of segments must not exceed log of cache line > >> > size"); > >> > > >> > #define ODP_BUFFER_POOL_BITS 4 > >> > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > >> > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > >> > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - > >> > ODP_BUFFER_SEG_BITS) > >> > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + > >> > ODP_BUFFER_INDEX_BITS) > >> > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > >> > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > >> > > >> > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > >> > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > >> > + > >> > typedef union odp_buffer_bits_t { > >> > uint32_t u32; > >> > odp_buffer_t handle; > >> > > >> > struct { > >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > >> > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > >> > uint32_t index:ODP_BUFFER_INDEX_BITS; > >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; > >> > +#else > >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; > >> > + uint32_t index:ODP_BUFFER_INDEX_BITS; > >> > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > >> > +#endif > >> > >> and this will work on 64bit platforms? > > > > > > Yes. I'm developing on a 64-bit platform. 
> > OK > > > > >> > >> > >> > }; > >> > -} odp_buffer_bits_t; > >> > > >> > + struct { > >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > >> > +#else > >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > >> > +#endif > >> > + }; > >> > +} odp_buffer_bits_t; > >> > > >> > /* forward declaration */ > >> > struct odp_buffer_hdr_t; > >> > > >> > - > >> > -/* > >> > - * Scatter/gather list of buffers > >> > - */ > >> > -typedef struct odp_buffer_scatter_t { > >> > - /* buffer pointers */ > >> > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > >> > - int num_bufs; /* num buffers */ > >> > - int pos; /* position on the list */ > >> > - size_t total_len; /* Total length */ > >> > -} odp_buffer_scatter_t; > >> > - > >> > - > >> > -/* > >> > - * Chunk of buffers (in single pool) > >> > - */ > >> > -typedef struct odp_buffer_chunk_t { > >> > - uint32_t num_bufs; /* num buffers */ > >> > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > >> > -} odp_buffer_chunk_t; > >> > - > >> > - > >> > /* Common buffer header */ > >> > typedef struct odp_buffer_hdr_t { > >> > struct odp_buffer_hdr_t *next; /* next buf in a list */ > >> > + int allocator; /* allocating thread id */ > >> > odp_buffer_bits_t handle; /* handle */ > >> > - odp_phys_addr_t phys_addr; /* physical data start > >> > address */ > >> > - void *addr; /* virtual data start > address > >> > */ > >> > - uint32_t index; /* buf index in the pool */ > >> > + union { > >> > + uint32_t all; > >> > + struct { > >> > + uint32_t zeroized:1; /* Zeroize buf data on free > >> > */ > >> > + uint32_t hdrdata:1; /* Data is in buffer hdr */ > >> > + }; > >> > + } flags; > >> > + int type; /* buffer type */ > >> > size_t size; /* max data size */ > >> > - size_t cur_offset; /* current offset */ > >> > odp_atomic_u32_t ref_count; /* reference count */ > >> > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ > >> > - int type; /* type of next header */ > >> > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ > >> > - > >> > + union { > >> > + void *buf_ctx; /* user context */ > >> > + void *udata_addr; /* user metadata addr */ > >> > + }; > >> > + size_t udata_size; /* size of user metadata */ > >> > + uint32_t segcount; /* segment count */ > >> > + uint32_t segsize; /* segment size */ > >> > + void *addr[ODP_BUFFER_MAX_SEG]; /* block > addrs > >> > */ > >> > } odp_buffer_hdr_t; > >> > > >> > -/* Ensure next header starts from 8 byte align */ > >> > -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, > >> > "ODP_BUFFER_HDR_T__SIZE_ERROR"); > >> > +typedef struct odp_buffer_hdr_stride { > >> > + uint8_t > >> > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > >> > +} odp_buffer_hdr_stride; > >> > > >> > +typedef struct odp_buf_blk_t { > >> > + struct odp_buf_blk_t *next; > >> > + struct odp_buf_blk_t *prev; > >> > +} odp_buf_blk_t; > >> > > >> > /* Raw buffer header */ > >> > typedef struct { > >> > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > >> > - uint8_t buf_data[]; /* start of buffer data area */ > >> > } odp_raw_buffer_hdr_t; > >> > > >> > - > >> > -/* Chunk header */ > >> > -typedef struct odp_buffer_chunk_hdr_t { > >> > - odp_buffer_hdr_t buf_hdr; > >> > - odp_buffer_chunk_t chunk; > >> > -} odp_buffer_chunk_hdr_t; > >> > - > >> > - > >> > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > >> > - > >> > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, 
odp_buffer_t > >> > buf_src); > >> > - > >> > +/* Forward declarations */ > >> > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > >> > > >> > #ifdef __cplusplus > >> > } > >> > diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h > >> > b/platform/linux-generic/include/odp_buffer_pool_internal.h > >> > index e0210bd..cd58f91 100644 > >> > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h > >> > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > >> > @@ -25,6 +25,35 @@ extern "C" { > >> > #include <odp_hints.h> > >> > #include <odp_config.h> > >> > #include <odp_debug.h> > >> > +#include <odp_shared_memory.h> > >> > +#include <odp_atomic.h> > >> > +#include <odp_atomic_internal.h> > >> > +#include <string.h> > >> > + > >> > +/** > >> > + * Buffer initialization routine prototype > >> > + * > >> > + * @note Routines of this type MAY be passed as part of the > >> > + * _odp_buffer_pool_init_t structure to be called whenever a > >> > + * buffer is allocated to initialize the user metadata > >> > + * associated with that buffer. > >> > + */ > >> > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); > >> > + > >> > +/** > >> > + * Buffer pool initialization parameters > >> > + * > >> > + * @param[in] udata_size Size of the user metadata for each > buffer > >> > + * @param[in] buf_init Function pointer to be called to > >> > initialize the > >> > + * user metadata for each buffer in the > pool. > >> > + * @param[in] buf_init_arg Argument to be passed to buf_init(). > >> > + * > >> > + */ > >> > +typedef struct _odp_buffer_pool_init_t { > >> > + size_t udata_size; /**< Size of user metadata for each > >> > buffer */ > >> > + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to > >> > use */ > >> > + void *buf_init_arg; /**< Argument to be passed to > >> > buf_init() */ > >> > +} _odp_buffer_pool_init_t; /**< Type of buffer initialization > >> > struct */ > >> > > >> > /* Use ticketlock instead of spinlock */ > >> > #define POOL_USE_TICKETLOCK > >> > @@ -39,6 +68,17 @@ extern "C" { > >> > #include <odp_spinlock.h> > >> > #endif > >> > > >> > +#ifdef POOL_USE_TICKETLOCK > >> > +#include <odp_ticketlock.h> > >> > +#define LOCK(a) odp_ticketlock_lock(a) > >> > +#define UNLOCK(a) odp_ticketlock_unlock(a) > >> > +#define LOCK_INIT(a) odp_ticketlock_init(a) > >> > +#else > >> > +#include <odp_spinlock.h> > >> > +#define LOCK(a) odp_spinlock_lock(a) > >> > +#define UNLOCK(a) odp_spinlock_unlock(a) > >> > +#define LOCK_INIT(a) odp_spinlock_init(a) > >> > +#endif > >> > > >> > struct pool_entry_s { > >> > #ifdef POOL_USE_TICKETLOCK > >> > @@ -47,66 +87,224 @@ struct pool_entry_s { > >> > odp_spinlock_t lock ODP_ALIGNED_CACHE; > >> > #endif > >> > > >> > - odp_buffer_chunk_hdr_t *head; > >> > - uint64_t free_bufs; > >> > char name[ODP_BUFFER_POOL_NAME_LEN]; > >> > - > >> > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > >> > - uintptr_t buf_base; > >> > - size_t buf_size; > >> > - size_t buf_offset; > >> > - uint64_t num_bufs; > >> > - void *pool_base_addr; > >> > - uint64_t pool_size; > >> > - size_t user_size; > >> > - size_t user_align; > >> > - int buf_type; > >> > - size_t hdr_size; > >> > + odp_buffer_pool_param_t params; > >> > + _odp_buffer_pool_init_t init_params; > >> > + odp_buffer_pool_t pool_hdl; > >> > + odp_shm_t pool_shm; > >> > + union { > >> > + uint32_t all; > >> > + struct { > >> > + uint32_t has_name:1; > >> > + uint32_t user_supplied_shm:1; > >> > + uint32_t unsegmented:1; > >> > + uint32_t 
zeroized:1; > >> > + uint32_t quiesced:1; > >> > + uint32_t low_wm_assert:1; > >> > + uint32_t predefined:1; > >> > + }; > >> > + } flags; > >> > + uint8_t *pool_base_addr; > >> > + size_t pool_size; > >> > + uint32_t buf_stride; > >> > + _odp_atomic_ptr_t buf_freelist; > >> > + _odp_atomic_ptr_t blk_freelist; > >> > + odp_atomic_u32_t bufcount; > >> > + odp_atomic_u32_t blkcount; > >> > + odp_atomic_u64_t bufallocs; > >> > + odp_atomic_u64_t buffrees; > >> > + odp_atomic_u64_t blkallocs; > >> > + odp_atomic_u64_t blkfrees; > >> > + odp_atomic_u64_t bufempty; > >> > + odp_atomic_u64_t blkempty; > >> > + odp_atomic_u64_t high_wm_count; > >> > + odp_atomic_u64_t low_wm_count; > >> > + size_t seg_size; > >> > + size_t high_wm; > >> > + size_t low_wm; > >> > + size_t headroom; > >> > + size_t tailroom; > >> > >> General comment add the same level of information into the variable > >> names. > >> > >> Not consistent use "_" used to separate words in variable names. > >> > > > > These are internal structs. Not relevant. > > so you mean that we shouldn't review internal code and > that its OK to bi inconsistent because its internal code? > I don't follow you here. You don't like the choice of variable names in the struct? > > > > >> > >> > >> > >> > }; > >> > > >> > +typedef union pool_entry_u { > >> > + struct pool_entry_s s; > >> > + > >> > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > >> > pool_entry_s))]; > >> > +} pool_entry_t; > >> > > >> > extern void *pool_entry_ptr[]; > >> > > >> > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == > 1) > >> > +#define buffer_is_secure(buf) (buf->flags.zeroized) > >> > +#define pool_is_secure(pool) (pool->flags.zeroized) > >> > +#else > >> > +#define buffer_is_secure(buf) 0 > >> > +#define pool_is_secure(pool) 0 > >> > +#endif > >> > + > >> > +#define TAG_ALIGN ((size_t)16) > >> > > >> > -static inline void *get_pool_entry(uint32_t pool_id) > >> > +#define odp_cs(ptr, old, new) \ > >> > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void > *)new, > >> > \ > >> > + _ODP_MEMMODEL_SC, \ > >> > + _ODP_MEMMODEL_SC) > >> > + > >> > +/* Helper functions for pointer tagging to avoid ABA race conditions > */ > >> > +#define odp_tag(ptr) \ > >> > + (((size_t)ptr) & (TAG_ALIGN - 1)) > >> > + > >> > +#define odp_detag(ptr) \ > >> > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > >> > + > >> > +#define odp_retag(ptr, tag) \ > >> > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > >> > + > >> > + > >> > +static inline void *get_blk(struct pool_entry_s *pool) > >> > { > >> > - return pool_entry_ptr[pool_id]; > >> > + void *oldhead, *myhead, *newhead; > >> > + > >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > >> > _ODP_MEMMODEL_ACQ); > >> > + > >> > + do { > >> > + size_t tag = odp_tag(oldhead); > >> > + myhead = odp_detag(oldhead); > >> > + if (myhead == NULL) > >> > + break; > >> > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, > tag + > >> > 1); > >> > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > >> > + > >> > + if (myhead == NULL) { > >> > + odp_atomic_inc_u64(&pool->blkempty); > >> > + } else { > >> > + uint64_t blkcount = > >> > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); > >> > + > >> > + /* Check for low watermark condition */ > >> > + if (blkcount == pool->low_wm) { > >> > + LOCK(&pool->lock); > >> > + if (blkcount <= pool->low_wm && > >> > + !pool->flags.low_wm_assert) { > >> > + pool->flags.low_wm_assert = 1; > >> > + odp_atomic_inc_u64(&pool->low_wm_count); > >> > + } > >> > 
+ UNLOCK(&pool->lock); > >> > + } > >> > + odp_atomic_inc_u64(&pool->blkallocs); > >> > + } > >> > + > >> > + return (void *)myhead; > >> > } > >> > > >> > +static inline void ret_blk(struct pool_entry_s *pool, void *block) > >> > +{ > >> > + void *oldhead, *myhead, *myblock; > >> > + > >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > >> > _ODP_MEMMODEL_ACQ); > >> > > >> > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > >> > + do { > >> > + size_t tag = odp_tag(oldhead); > >> > + myhead = odp_detag(oldhead); > >> > + ((odp_buf_blk_t *)block)->next = myhead; > >> > + myblock = odp_retag(block, tag + 1); > >> > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > >> > + > >> > + odp_atomic_inc_u64(&pool->blkfrees); > >> > + uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, > 1); > >> > >> Move uint64_t up with next to all the other globaly declared variables > >> for this function. > > > > > > These are not global variables. > > Move the declaration to the top of this function next to the "void > *oldhead,...." > No thank you. > > > > >> > >> > >> > >> Some comments to start with. =) > >> > >> Cheers, > >> Anders > > > > >
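
(Since the get_blk()/ret_blk() pair drew several comments in the exchange above: the tag arithmetic is a standard ABA guard for a lock-free freelist. Below is a stripped-down sketch of just that mechanism, with C11 atomics assumed in place of _odp_atomic_ptr_cmp_xchg_strong() and blocks assumed to be at least 16-byte aligned so the low pointer bits are free to carry the tag.)

#include <stdatomic.h>
#include <stdint.h>

#define TAG_ALIGN ((uintptr_t)16)	/* blocks assumed 16-byte aligned */

typedef struct blk_t {
	struct blk_t *next;
} blk_t;

static _Atomic uintptr_t freelist;	/* tagged head pointer */

static inline uintptr_t tag_of(uintptr_t p) { return p & (TAG_ALIGN - 1); }
static inline blk_t *detag(uintptr_t p)
{
	return (blk_t *)(p & ~(TAG_ALIGN - 1));
}
static inline uintptr_t retag(blk_t *p, uintptr_t tag)
{
	return (uintptr_t)p | (tag & (TAG_ALIGN - 1));
}

/* Pop the head block, bumping the tag so a stale head cannot be
 * mistaken for a fresh one (the ABA problem) */
static blk_t *get_blk(void)
{
	uintptr_t old = atomic_load(&freelist);
	uintptr_t new;
	blk_t *head;

	do {
		head = detag(old);
		if (head == NULL)
			return NULL;
		new = retag(head->next, tag_of(old) + 1);
	} while (!atomic_compare_exchange_weak(&freelist, &old, new));

	return head;
}

/* Push a block back, again bumping the tag */
static void ret_blk(blk_t *blk)
{
	uintptr_t old = atomic_load(&freelist);
	uintptr_t new;

	do {
		blk->next = detag(old);
		new = retag(blk, tag_of(old) + 1);
	} while (!atomic_compare_exchange_weak(&freelist, &old, new));
}

(The tag is bumped on every successful exchange, so a head that was popped and pushed back between a thread's load and its compare-and-swap no longer compares equal. With only four tag bits this narrows the ABA window rather than closing it, which matches the quoted code.)
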
Interesting suggestion. I'll look into it. Is this available on all architectures? Thanks.

On Wed, Dec 3, 2014 at 7:01 AM, Shmulik Ladkani <shmulik.ladkani@gmail.com> wrote:

> On Tue, 2 Dec 2014 13:17:01 -0600 Bill Fischofer <bill.fischofer@linaro.org> wrote:
> > +#define ODP_SEGBITS(x) \
> > + ((x) < 2 ? 1 : \
> > + ((x) < 4 ? 2 : \
> > + ((x) < 8 ? 3 : \
> > + ((x) < 16 ? 4 : \
> > + ((x) < 32 ? 5 : \
> > + ((x) < 64 ? 6 : \
> > + ((x) < 128 ? 7 : \
> > + ((x) < 256 ? 8 : \
> > + ((x) < 512 ? 9 : \
> > + ((x) < 1024 ? 10 : \
> > + ((x) < 2048 ? 11 : \
> > + ((x) < 4096 ? 12 : \
> > + (0/0)))))))))))))
>
> I assume this is only used at compile time, but still...
> Maybe this can be implemented using __builtin_clzl, something like:
>
> #define ODP_SEGBITS(x) (32 - __builtin_clzl((unsigned long)(x)))
>
> (replace the hardcoded '32' with the number of bits in an 'unsigned long').
>
> Also note that __builtin_clzl(0) is undefined.
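
(A quick compile-time cross-check of that suggestion, written as a sketch; __builtin_clzl is a GCC/Clang extension, not standard C, and with constant arguments it folds to an integer constant expression, so the two forms can be compared in static asserts:)

#include <limits.h>

/* Width-safe form of the suggestion: derive the bit count of
 * unsigned long instead of hardcoding 32 */
#define SEGBITS_CLZ(x) \
	((int)(sizeof(unsigned long) * CHAR_BIT) - \
	 __builtin_clzl((unsigned long)(x)))

/* Spot-check agreement with the ternary ODP_SEGBITS() chain */
_Static_assert(SEGBITS_CLZ(1) == 1, "matches ODP_SEGBITS(1)");
_Static_assert(SEGBITS_CLZ(2) == 2, "matches ODP_SEGBITS(2)");
_Static_assert(SEGBITS_CLZ(64) == 7, "matches ODP_SEGBITS(64)");
_Static_assert(SEGBITS_CLZ(4095) == 12, "matches ODP_SEGBITS(4095)");
/* SEGBITS_CLZ(0) is undefined behaviour, as noted above */
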
On 12/02/2014 09:17 PM, Bill Fischofer wrote:

> struct {
> +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN
> uint32_t pool_id:ODP_BUFFER_POOL_BITS;
> uint32_t index:ODP_BUFFER_INDEX_BITS;
> + uint32_t seg:ODP_BUFFER_SEG_BITS;
> +#else
> + uint32_t seg:ODP_BUFFER_SEG_BITS;
> + uint32_t index:ODP_BUFFER_INDEX_BITS;
> + uint32_t pool_id:ODP_BUFFER_POOL_BITS;
> +#endif
> };

If this data structure never leaves the application, then there is no reason to have a separate layout for big/little endian.
It simplified debugging because the LOG output is the same regardless of whether I'm testing on big- or little-endian systems. My mind isn't wired to read byte-reversed output fields.

On Wed, Dec 3, 2014 at 7:18 AM, Taras Kondratiuk <taras.kondratiuk@linaro.org> wrote:

> On 12/02/2014 09:17 PM, Bill Fischofer wrote:
>
>> struct {
>> +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN
>> uint32_t pool_id:ODP_BUFFER_POOL_BITS;
>> uint32_t index:ODP_BUFFER_INDEX_BITS;
>> + uint32_t seg:ODP_BUFFER_SEG_BITS;
>> +#else
>> + uint32_t seg:ODP_BUFFER_SEG_BITS;
>> + uint32_t index:ODP_BUFFER_INDEX_BITS;
>> + uint32_t pool_id:ODP_BUFFER_POOL_BITS;
>> +#endif
>> };
>>
> If this data structure never leaves the application, then there is no
> reason to have a separate layout for big/little endian.
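
(To make the debugging argument concrete: with the #if'd field order, pool_id always occupies the most significant bits of u32, so a hex-printed handle reads identically on both byte orders. A sketch, assuming the usual ABI rule that bitfields fill from the least significant bit on little-endian and from the most significant bit on big-endian:)

#include <stdint.h>
#include <stdio.h>

#define POOL_BITS 4
#define SEG_BITS 6
#define INDEX_BITS (32 - POOL_BITS - SEG_BITS)

typedef union handle_bits_t {
	uint32_t u32;
	struct {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
		uint32_t pool_id:POOL_BITS;   /* fills from the MSB */
		uint32_t index:INDEX_BITS;
		uint32_t seg:SEG_BITS;
#else
		uint32_t seg:SEG_BITS;        /* fills from the LSB */
		uint32_t index:INDEX_BITS;
		uint32_t pool_id:POOL_BITS;
#endif
	};
} handle_bits_t;

int main(void)
{
	handle_bits_t h = { .u32 = 0 };

	h.pool_id = 0xA;
	h.index = 3;
	h.seg = 1;
	/* Prints handle = 0xa00000c1 on either byte order:
	 * pool_id in the top nibble, seg in the low bits */
	printf("handle = 0x%08x\n", h.u32);
	return 0;
}
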
On 2014-12-03 07:05, Bill Fischofer wrote: > On Wed, Dec 3, 2014 at 5:07 AM, Anders Roxell <anders.roxell@linaro.org> > wrote: > > > Hi, > > > > This is the proposed way to break up your patch: > > 1. break circular dependencies > > 2. move inline functions to a new "odp_buffer_inlines.h" file. > > 3. restructuring ODP buffer pool > > 4. odp_buffer_pool_create > > 5. odp_buffer_pool_destroy > > 6. odp_buffer_pool_info > > > > Seriously, what is the benefit of this sort of slicing and dicing? The > goal here is to get the code merged rather than to figure out how to > package it according to some esthetic ideal. Once it's merged nobody is > going to care about any of this. Are you seriously suggesting that the > code is unreviewable unless done this way? There are lots of follow-on > patches that I want to do but these are being delayed trying to get off the > ground. This whole change is "Phase 1". Once that's in, the subsequent > patches will be smaller and more focused, but this is an iterative process. The benefit of "slicing and dicing" is to clearly and easily see what consequences a certain change inflicts on the system. As this patch stands now, it's pretty hard to see what consequences it has. If it was split into the above suggestion, or a version thereof if it's not feasible, the changes and their reasons might become obvious. > > > Further slicing might be doable, but certain things like separating 3 and 4 > is not possible since the current odp_buffer_pool_create() is intimately > tied to the current structure, which is what is being replaced. I consider > 1 and 2 as part of that restructure. The only reason why > odp_buffer_pool_destroy() and odp_buffer_pool_info() can be separated is > because they are new APIs, but again separating them into separate patches > is basically wasted motion here since they are not optional pieces of the > API, but the latest patch does that in the interest of being responsive to > comments. I disagree. This patch breaks odp_example. Perfect time for a lesson, and here we go. The only thing I can say now is, the odp_example works without your three patches, and with your first (humongous) patch it doesn't. I can't be expected to actually inspect the entire patch to figure out what went wrong with *your* code. If it was split up into logically isolated commits, I could at least say "Hey, I bisected your commits and found that commit so and so regressed". At the moment, all I can say is "After applying this patch the odp_example doesn't terminate... I killed it after 2hours and 11 minutes". This isn't something I should have to tell you though, as it should have been tested before the patch went for review. F.ex. the api-change to odp_buffer_pool_create (taking the data-clump params instead of singular arguments), would be a perfectly isolated change that should have it's own commit. Fix the bug and please split up the patch into more manageable pieces. As it stands, logical flow is intermixed with refactoring and is pretty much unreviewable (what do you say?). > > I'd really prefer that we focus on reviewing patch contents rather than > packaging, given that we're building ODP here, not making incremental > changes to an established product. See above. > > > > see more comments inline. > > > > On 2 December 2014 at 22:50, Bill Fischofer <bill.fischofer@linaro.org> > > wrote: > > > > > > > > > On Tue, Dec 2, 2014 at 3:05 PM, Anders Roxell <anders.roxell@linaro.org> > > > wrote: > > >> > > >> prefix this patch with: > > >> api: ... 
> > >> > > >> On 2014-12-02 13:17, Bill Fischofer wrote: > > >> > Restructure ODP buffer pool internals to support new APIs. > > >> > > >> The comment doesn't add any extra value from the short log. > > >> "Modifys linux-generic, example and test to make them ready for adding > > the > > >> new odp_buffer_pool_create API" > > > > > > > > > The comment is descriptive of what's in the patch. > > > > > >> > > >> > > >> > Implements new odp_buffer_pool_create() API. > > >> > > > >> > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> > > >> > --- > > >> > example/generator/odp_generator.c | 19 +- > > >> > example/ipsec/odp_ipsec.c | 57 +- > > >> > example/l2fwd/odp_l2fwd.c | 19 +- > > >> > example/odp_example/odp_example.c | 18 +- > > >> > example/packet/odp_pktio.c | 19 +- > > >> > example/timer/odp_timer_test.c | 13 +- > > >> > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- > > >> > platform/linux-generic/include/api/odp_config.h | 10 + > > >> > .../linux-generic/include/api/odp_platform_types.h | 9 + > > >> > > >> Group stuff into odp_platform_types.h should be its own patch. > > >> > > > > > > The change to odp_platform_types.h moves typedefs from > > odp_shared_memory.h > > > to break > > > circular dependencies that would otherwise arise. As a result, this is > > not > > > separable from > > > the rest of this patch. > > > > don't agree. > > > > > > > > > > >> > > >> > .../linux-generic/include/api/odp_shared_memory.h | 10 +- > > >> > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ > > >> > > >> Creating an inline file should be its own patch. > > > > > > > > > No, it's not independent of the rest of these changes. This is a > > > restructuring patch. The rule that > > > you've promoted is that each patch can be applied independently. Trying > > to > > > make this it's own > > > patch wouldn't follow that rule. > > > > Good that you are trying. > > You are saying "ODP buffer pool restructure" in the short log, please > > do that and *only* that in this patch then! > > Do not add new APIs or change existing APIs, only restructure! > > > > > > > >> > > >> > > >> > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- > > >> > .../include/odp_buffer_pool_internal.h | 278 ++++++-- > > >> > .../linux-generic/include/odp_packet_internal.h | 50 +- > > >> > .../linux-generic/include/odp_timer_internal.h | 11 +- > > >> > platform/linux-generic/odp_buffer.c | 31 +- > > >> > platform/linux-generic/odp_buffer_pool.c | 711 > > >> > +++++++++------------ > > >> > platform/linux-generic/odp_packet.c | 41 +- > > >> > platform/linux-generic/odp_queue.c | 1 + > > >> > platform/linux-generic/odp_schedule.c | 20 +- > > >> > platform/linux-generic/odp_timer.c | 3 +- > > >> > test/api_test/odp_timer_ping.c | 19 +- > > >> > test/validation/odp_crypto.c | 43 +- > > >> > test/validation/odp_queue.c | 19 +- > > >> > 24 files changed, 1024 insertions(+), 762 deletions(-) > > >> > create mode 100644 > > platform/linux-generic/include/odp_buffer_inlines.h > > >> > > > >> > > >> [...] 
> > >> > > >> > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h > > >> > b/platform/linux-generic/include/api/odp_buffer_pool.h > > >> > index 30b83e0..7022daa 100644 > > >> > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > > >> > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > > >> > @@ -36,32 +36,101 @@ extern "C" { > > >> > #define ODP_BUFFER_POOL_INVALID 0 > > >> > > > >> > /** > > >> > + * Buffer pool parameters > > >> > + * Used to communicate buffer pool creation options. > > >> > + */ > > >> > +typedef struct odp_buffer_pool_param_t { > > >> > + size_t buf_size; /**< Buffer size in bytes. The maximum > > >> > + number of bytes application will > > >> > > >> "...bytes the application..." > > > > > > > > > The definite article is optional in english grammar here. This level of > > > nit-picking isn't > > > needed. > > > > yes, its a nit that you can fix when you sen version 5 or whatever > > version you will send out. > > > > You misunderstood. As written it's perfectly valid standard English. > English has more than one way of saying things. Are you saying you were > unable to understand the comment? I appreciate that you might have chosen > to write it differently, but you didn't write it. You are the native English spoken person of us, *not* me... =) Excluding the above example, in general grammatical changes may be nits and they should *not* hold up any patch that are trying to go in. However, if one has to redo the patch and send out a new version the nits shall be fixed as well! > > > > > > > >> > > >> > > >> > + store in each buffer. */ > > >> > + size_t buf_align; /**< Minimum buffer alignment in bytes. > > >> > + Valid values are powers of two. Use 0 > > >> > + for default alignment. Default will > > >> > + always be a multiple of 8. */ > > >> > + uint32_t num_bufs; /**< Number of buffers in the pool */ > > >> > + int buf_type; /**< Buffer type */ > > >> > +} odp_buffer_pool_param_t; > > >> > + > > >> > +/** > > >> > * Create a buffer pool > > >> > + * This routine is used to create a buffer pool. It take three > > >> > + * arguments: the optional name of the pool to be created, an > > optional > > >> > shared > > >> > + * memory handle, and a parameter struct that describes the pool to > > be > > >> > + * created. If a name is not specified the result is an anonymous > > pool > > >> > that > > >> > + * cannot be referenced by odp_buffer_pool_lookup(). > > >> > * > > >> > - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - > > 1 > > >> > chars) > > >> > - * @param base_addr Pool base address > > >> > - * @param size Pool size in bytes > > >> > - * @param buf_size Buffer size in bytes > > >> > - * @param buf_align Minimum buffer alignment > > >> > - * @param buf_type Buffer type > > >> > + * @param[in] name Name of the pool, max > > ODP_BUFFER_POOL_NAME_LEN-1 > > >> > chars. > > >> > + * May be specified as NULL for anonymous pools. > > >> > * > > >> > - * @return Buffer pool handle > > >> > + * @param[in] shm The shared memory object in which to create > > the > > >> > pool. > > >> > + * Use ODP_SHM_NULL to reserve default memory > > type > > >> > + * for the buffer type. > > >> > + * > > >> > + * @param[in] params Buffer pool parameters. > > >> > + * > > >> > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call > > >> > failed. 
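
(Stepping back from the wording nits, the API shape itself is visible in the hunk above: create takes a name, an optional shm handle, and the params struct. For orientation, a minimal caller would look something like the sketch below; ODP_BUFFER_TYPE_PACKET is assumed here purely as an illustrative buffer type value, and odp.h as the umbrella header.)

#include <odp.h>

/* Sketch of the new three-argument odp_buffer_pool_create() call */
static odp_buffer_pool_t make_pkt_pool(void)
{
	odp_buffer_pool_param_t params;
	odp_buffer_pool_t pool;

	params.buf_size  = 1856;  /* max bytes the app stores per buffer */
	params.buf_align = 0;     /* 0 selects the default alignment */
	params.num_bufs  = 1024;
	params.buf_type  = ODP_BUFFER_TYPE_PACKET;  /* assumed constant */

	/* ODP_SHM_NULL lets the implementation reserve memory itself */
	pool = odp_buffer_pool_create("pkt_pool", ODP_SHM_NULL, &params);

	if (pool == ODP_BUFFER_POOL_INVALID)
		ODP_ERR("buffer pool creation failed\n");

	return pool;
}

(Because a name was given, odp_buffer_pool_lookup("pkt_pool") would later find this pool; passing NULL instead would create the anonymous, unlookupable kind discussed earlier in the thread.)
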
> > >> > > >> Should be > > >> @retval Buffer pool handle on success > > >> @retval ODP_BUFFER_POOL_INVALID if call failed 1 (if it can fail list > > the > > >> reasons) > > >> @retval ODP_BUFFER_POOL_INVALID if call failed 2 (if it can fail list > > the > > >> reasons) > > >> @retval ODP_BUFFER_POOL_INVALID if call failed N > > > > > > > > > The documentation is consistent with that used in the rest of the file. > > If > > > we want a doc cleanup patch > > > that should be a separate patch and cover the whole file, not just one > > > routine that would otherwise stand > > > out as an anomaly. I'll be happy to write that after this patch gets > > > merged. > > > > Wasn't this a "ODP buffer pool restructure" patch, I would say that > > this goes under restructure and or maybe it goes under a new patch > > "api: change odp_buffer_pool_create" =) > > > > As soon as this patch is merged I'll submit a patch to change all of the > docs. I'll expect it to be approved quickly. :) You have more or less rewritten the entire file and that is a reason for you to change this in a separate (and isolated!!) patch in this patch set. > > > > > > > > > >> > > >> > > >> > */ > > >> > + > > >> > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > >> > - void *base_addr, uint64_t size, > > >> > - size_t buf_size, size_t > > >> > buf_align, > > >> > - int buf_type); > > >> > + odp_shm_t shm, > > >> > + odp_buffer_pool_param_t > > *params); > > >> > > > >> > +/** > > >> > + * Destroy a buffer pool previously created by > > odp_buffer_pool_create() > > >> > + * > > >> > + * @param[in] pool Handle of the buffer pool to be destroyed > > >> > + * > > >> > + * @return 0 on Success, -1 on Failure. > > >> > > >> use @retval here as well and list the reasons how it can fail.] > > > > > > > > > Same comment as above. > > > > I'm going to copy you here: > > "Same comment as above." =) > > > > > > > >> > > >> > > >> > + * > > >> > + * @note This routine destroys a previously created buffer pool. This > > >> > call > > >> > + * does not destroy any shared memory object passed to > > >> > + * odp_buffer_pool_create() used to store the buffer pool contents. > > The > > >> > caller > > >> > + * takes responsibility for that. If no shared memory object was > > passed > > >> > as > > >> > + * part of the create call, then this routine will destroy any > > internal > > >> > shared > > >> > + * memory objects associated with the buffer pool. Results are > > >> > undefined if > > >> > + * an attempt is made to destroy a buffer pool that contains > > allocated > > >> > or > > >> > + * otherwise active buffers. > > >> > + */ > > >> > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); > > >> > > >> This doesn't belong in this patch, belongs in the > > >> odp_buffer_pool_destroy patch. > > >> > > > > > > That patch is for the implementation of the function, as described. > > This is > > > benign here. > > > > > >> > > >> > > > >> > /** > > >> > * Find a buffer pool by name > > >> > * > > >> > - * @param name Name of the pool > > >> > + * @param[in] name Name of the pool > > >> > * > > >> > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not > > found. > > >> > > >> Fix this. > > > > > > > > > Same comments as above. > > > > > >> > > >> > > >> > + * > > >> > + * @note This routine cannot be used to look up an anonymous pool > > (one > > >> > created > > >> > + * with no name). > > >> > > >> How can I delete an anonymous pool? > > > > > > > > > You can't. This is just implementing what's been specified. 
If we want > > to > > > change the spec > > > that can be addressed in a follow-on patch. > > > > Ok I didn't know thank you for the explanation. > > > > You're welcome. > > > > > > > > > >> > > >> > > >> > */ > > >> > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > > >> > > > >> > +/** > > >> > + * Buffer pool information struct > > >> > + * Used to get information about a buffer pool. > > >> > + */ > > >> > +typedef struct odp_buffer_pool_info_t { > > >> > + const char *name; /**< pool name */ > > >> > + odp_buffer_pool_param_t params; /**< pool parameters */ > > >> > +} odp_buffer_pool_info_t; > > >> > + > > >> > +/** > > >> > + * Retrieve information about a buffer pool > > >> > + * > > >> > + * @param[in] pool Buffer pool handle > > >> > + * > > >> > + * @param[out] shm Recieves odp_shm_t supplied by caller at > > >> > + * pool creation, or ODP_SHM_NULL if the > > >> > + * pool is managed internally. > > >> > + * > > >> > + * @param[out] info Receives an odp_buffer_pool_info_t object > > >> > + * that describes the pool. > > >> > + * > > >> > + * @return 0 on success, -1 if info could not be retrieved. > > >> > > >> Fix > > > > > > > > > Same doc comments as above. > > > > > >> > > >> > > >> > + */ > > >> > + > > >> > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > > >> > + odp_buffer_pool_info_t *info); > > >> > > >> This doesn't belong in this patch, belongs in the > > >> odp_buffer_pool_info patch. > > >> > > >> Again, the separate patch implements these functions. These are benign. > > > > > > > > >> > > >> > > > >> > /** > > >> > * Print buffer pool info > > >> > diff --git a/platform/linux-generic/include/api/odp_config.h > > >> > b/platform/linux-generic/include/api/odp_config.h > > >> > index 906897c..1226d37 100644 > > >> > --- a/platform/linux-generic/include/api/odp_config.h > > >> > +++ b/platform/linux-generic/include/api/odp_config.h > > >> > @@ -49,6 +49,16 @@ extern "C" { > > >> > #define ODP_CONFIG_PKTIO_ENTRIES 64 > > >> > > > >> > /** > > >> > + * Segment size to use - > > >> > > >> What does "-" mean? > > >> Can you elaborate more on this? > > > > > > > > > It's a stray character. > > > > gah, I'm sorry for beeing unclear. > > I meant "-" remove! > > > > and can you elaborate more and not only say "Segment size to use". > > > > > > > >> > > >> > > >> > + */ > > >> > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > > >> > + > > >> > +/** > > >> > + * Maximum buffer size supported > > >> > + */ > > >> > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) > > >> > > >> Isn't this platform specific? > > > > > > > > > Yes, and this is platform/linux-generic. I've chosen this for now > > because > > > the current linux-generic > > > packet I/O doesn't support scatter/gather reads/writes. > > > > Bill I know this is linux-generic, I was unclear again. > > > > Why do you place this in odp_config.h and not in odp_platform_types.h? > > > > odp_platform_types.h is for typedefs. These are implementation limits, > like number of buffer pools we > support, etc. It's the proper file for these sort of limits since you can > change variables here and get a > different configuration of linux-generic. 
> > > > > > > > > >> > > >> > > >> > + > > >> > +/** > > >> > * @} > > >> > */ > > >> > > > >> > diff --git a/platform/linux-generic/include/api/odp_platform_types.h > > >> > b/platform/linux-generic/include/api/odp_platform_types.h > > >> > index 4db47d3..b9b3aea 100644 > > >> > --- a/platform/linux-generic/include/api/odp_platform_types.h > > >> > +++ b/platform/linux-generic/include/api/odp_platform_types.h > > >> > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > > >> > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > > >> > > > >> > /** > > >> > + * ODP shared memory block > > >> > + */ > > >> > +typedef uint32_t odp_shm_t; > > >> > + > > >> > +/** Invalid shared memory block */ > > >> > +#define ODP_SHM_INVALID 0 > > >> > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use > > >> > */ > > >> > > >> ODP_SHM_* touches shm functionality and should be in its own patch to > > >> fix/move it. > > > > > > > > > Already discussed above. > > >> > > >> > > >> > + > > >> > +/** > > >> > * @} > > >> > */ > > >> > > > >> > diff --git a/platform/linux-generic/include/api/odp_shared_memory.h > > >> > b/platform/linux-generic/include/api/odp_shared_memory.h > > >> > index 26e208b..f70db5a 100644 > > >> > --- a/platform/linux-generic/include/api/odp_shared_memory.h > > >> > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > > >> > @@ -20,6 +20,7 @@ extern "C" { > > >> > > > >> > > > >> > #include <odp_std_types.h> > > >> > +#include <odp_platform_types.h> > > >> > > >> Not relevant for the odp_buffer_pool_create > > > > > > > > > Incorrect. It is part of the restructure for reasons discussed above. > > > > OK, for restructuring but not for odp_buffer_pool_create =) > > > > > > > >> > > >> > > >> > > > >> > /** @defgroup odp_shared_memory ODP SHARED MEMORY > > >> > * Operations on shared memory. > > >> > @@ -38,15 +39,6 @@ extern "C" { > > >> > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > > >> > > > >> > /** > > >> > - * ODP shared memory block > > >> > - */ > > >> > -typedef uint32_t odp_shm_t; > > >> > - > > >> > -/** Invalid shared memory block */ > > >> > -#define ODP_SHM_INVALID 0 > > >> > - > > >> > - > > >> > -/** > > >> > * Shared memory block info > > >> > */ > > >> > typedef struct odp_shm_info_t { > > >> > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h > > >> > b/platform/linux-generic/include/odp_buffer_inlines.h > > >> > new file mode 100644 > > >> > index 0000000..f33b41d > > >> > --- /dev/null > > >> > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > > >> > @@ -0,0 +1,157 @@ > > >> > +/* Copyright (c) 2014, Linaro Limited > > >> > + * All rights reserved. 
> > >> > + * > > >> > + * SPDX-License-Identifier: BSD-3-Clause > > >> > + */ > > >> > + > > >> > +/** > > >> > + * @file > > >> > + * > > >> > + * Inline functions for ODP buffer mgmt routines - implementation > > >> > internal > > >> > + */ > > >> > + > > >> > +#ifndef ODP_BUFFER_INLINES_H_ > > >> > +#define ODP_BUFFER_INLINES_H_ > > >> > + > > >> > +#ifdef __cplusplus > > >> > +extern "C" { > > >> > +#endif > > >> > + > > >> > +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t > > >> > *hdr) > > >> > +{ > > >> > + odp_buffer_bits_t handle; > > >> > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > > >> > + struct pool_entry_s *pool = get_pool_entry(pool_id); > > >> > + > > >> > + handle.pool_id = pool_id; > > >> > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > > >> > + ODP_CACHE_LINE_SIZE; > > >> > + handle.seg = 0; > > >> > + > > >> > + return handle.u32; > > >> > +} > > >> > + > > >> > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > > >> > +{ > > >> > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > > >> > + if (hdl != hdr->handle.handle) { > > >> > + ODP_DBG("buf %p should have handle %x but is cached as > > >> > %x\n", > > >> > + hdr, hdl, hdr->handle.handle); > > >> > + hdr->handle.handle = hdl; > > >> > + } > > >> > + return hdr->handle.handle; > > >> > +} > > >> > + > > >> > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > >> > +{ > > >> > + odp_buffer_bits_t handle; > > >> > + uint32_t pool_id; > > >> > + uint32_t index; > > >> > + struct pool_entry_s *pool; > > >> > + > > >> > + handle.u32 = buf; > > >> > + pool_id = handle.pool_id; > > >> > + index = handle.index; > > >> > + > > >> > +#ifdef POOL_ERROR_CHECK > > >> > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > > >> > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > > >> > + return NULL; > > >> > + } > > >> > +#endif > > >> > + > > >> > + pool = get_pool_entry(pool_id); > > >> > + > > >> > +#ifdef POOL_ERROR_CHECK > > >> > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > > >> > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > > >> > + return NULL; > > >> > + } > > >> > +#endif > > >> > + > > >> > + return (odp_buffer_hdr_t *)(void *) > > >> > + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE)); > > >> > +} > > >> > + > > >> > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > > >> > +{ > > >> > + return odp_atomic_load_u32(&buf->ref_count); > > >> > +} > > >> > + > > >> > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t > > *buf, > > >> > + uint32_t val) > > >> > +{ > > >> > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > > >> > +} > > >> > + > > >> > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t > > *buf, > > >> > + uint32_t val) > > >> > +{ > > >> > + uint32_t tmp; > > >> > + > > >> > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > > >> > + > > >> > + if (tmp < val) { > > >> > + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > > >> > + return 0; > > >> > + } else { > > >> > > >> drop the else statement > > > > > > > > > That would be erroneous code. Refcounts don't go below 0. This code > > > ensures that. > > > > Bill, I was unclear again. > > I thought you understood that I meant only remove "else" and move out > > return on tab! > > > > like this: > > > > if (tmp < val) { > > odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > > return 0; > > } > > return tmp - val; > > > > De gustibus non est disputandum. 
> > Actually, having the else makes the code clearer and less prone to error > introduction in future updates. I'm sure you'll agree that there is no > performance difference between the two. That may be, but it's established practice in our code base, and having a de facto "coding standard" makes code *familiar* and thus easier to read. As we have no formalized coding standard, this could probably be changed, but that's another discussion and another patch over the entire code base. > > > > > > > > > > >> > > >> > > >> > + return tmp - val; > > >> > + } > > >> > +} > > >> > + > > >> > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > > >> > +{ > > >> > + odp_buffer_bits_t handle; > > >> > + odp_buffer_hdr_t *buf_hdr; > > >> > + handle.u32 = buf; > > >> > + > > >> > + /* For buffer handles, segment index must be 0 */ > > >> > > >> Why does the buffer handle always have to have a segment index that must > > >> be 0? > > > > > > > > > Because that's how I've defined it in this implementation. > > > validate_buffer() can be > > > given any 32-bit value and it will robustly say whether or not it is a > > valid > > > buffer handle. > > > > hmm... OK, I will look again > > > > > > > >> > > >> > > >> > + if (handle.seg != 0) > > >> > + return NULL; > > >> > > >> Why do we need to check everything? > > >> shouldn't we trust our internal stuff to be sent correctly? > > >> Maybe it should be an ODP_ASSERT? > > > > > > > > > No, odp_buffer_is_valid() does not assert. It returns a yes/no value for > > > any > > > input value. > > > > > >> > > >> > > >> > + > > >> > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > > >> > + > > >> > + /* If pool not created, handle is invalid */ > > >> > + if (pool->s.pool_shm == ODP_SHM_INVALID) > > >> > + return NULL; > > >> > > >> The same applies here. > > > > > > > > > Same answer. > > > > > >> > > >> > > >> > + > > >> > + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; > > >> > + > > >> > + /* A valid buffer index must be on stride, and must be in range > > */ > > >> > + if ((handle.index % buf_stride != 0) || > > >> > + ((uint32_t)(handle.index / buf_stride) >= > > >> > pool->s.params.num_bufs)) > > >> > + return NULL; > > >> > + > > >> > + buf_hdr = (odp_buffer_hdr_t *)(void *) > > >> > + (pool->s.pool_base_addr + > > >> > + (handle.index * ODP_CACHE_LINE_SIZE)); > > >> > + > > >> > + /* Handle is valid, so buffer is valid if it is allocated */ > > >> > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > > >> > + return NULL; > > >> > + else > > >> > > >> Drop the else > > > > > > > > > No, that would be erroneous. A buffer handle is no longer valid if > > > the buffer has been freed. That's what's being checked here. > > > > again: > > /* Handle is valid, so buffer is valid if it is allocated */ > > if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > > return NULL; > > return buf_hdr; > > > > Same comment as above. If I didn't think having the else here was clearer > I would not have written the code that way. The style passes checkpatch, > which should be sufficient for reviewers. See above regarding coding style. 
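
(On the validate_buf() mechanics debated above: since handle.index counts cache lines from the pool base, validity reduces to two integer checks against the pool's buffer stride. A condensed sketch of just that arithmetic, with the relevant pool fields passed as plain parameters and the cache line size assumed:)

#include <stdbool.h>
#include <stdint.h>

#define CACHE_LINE 64	/* assumed ODP_CACHE_LINE_SIZE */

/* handle.index is measured in cache-line units from the pool base,
 * so a well-formed index lies exactly on a buffer stride and names
 * a buffer that actually exists in the pool */
static bool index_is_valid(uint32_t index, uint32_t stride_bytes,
			   uint32_t num_bufs)
{
	uint32_t stride = stride_bytes / CACHE_LINE;

	return (index % stride == 0) && (index / stride < num_bufs);
}

/* Decoding a valid index back to its header is then a single
 * multiply-and-add, as in odp_buf_to_hdr() and validate_buf() */
static void *index_to_hdr(uint8_t *pool_base, uint32_t index)
{
	return pool_base + (uintptr_t)index * CACHE_LINE;
}
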
> > > > > > > > > >> > > >> > > >> > + return buf_hdr; > > >> > +} > > >> > + > > >> > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > >> > + > > >> > +static inline void *buffer_map(odp_buffer_hdr_t *buf, > > >> > + size_t offset, > > >> > + size_t *seglen, > > >> > + size_t limit) > > >> > +{ > > >> > + int seg_index = offset / buf->segsize; > > >> > + int seg_offset = offset % buf->segsize; > > >> > + size_t buf_left = limit - offset; > > >> > + > > >> > + *seglen = buf_left < buf->segsize ? > > >> > + buf_left : buf->segsize - seg_offset; > > >> > + > > >> > + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); > > >> > +} > > >> > + > > >> > +#ifdef __cplusplus > > >> > +} > > >> > +#endif > > >> > + > > >> > +#endif > > >> > diff --git a/platform/linux-generic/include/odp_buffer_internal.h > > >> > b/platform/linux-generic/include/odp_buffer_internal.h > > >> > index 0027bfc..29666db 100644 > > >> > --- a/platform/linux-generic/include/odp_buffer_internal.h > > >> > +++ b/platform/linux-generic/include/odp_buffer_internal.h > > >> > @@ -24,99 +24,118 @@ extern "C" { > > >> > #include <odp_buffer.h> > > >> > #include <odp_debug.h> > > >> > #include <odp_align.h> > > >> > - > > >> > -/* TODO: move these to correct files */ > > >> > - > > >> > -typedef uint64_t odp_phys_addr_t; > > >> > - > > >> > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > >> > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > >> > - > > >> > -#define ODP_BUFS_PER_CHUNK 16 > > >> > -#define ODP_BUFS_PER_SCATTER 4 > > >> > - > > >> > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > > >> > - > > >> > +#include <odp_config.h> > > >> > +#include <odp_byteorder.h> > > >> > +#include <odp_thread.h> > > >> > + > > >> > + > > >> > +#define ODP_BUFFER_MAX_SEG > > >> > (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > > >> > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG > > - > > >> > 1)) > > >> > + > > >> > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == > > 0, > > >> > + "ODP Segment size must be a multiple of cache line > > >> > size"); > > >> > + > > >> > +#define ODP_SEGBITS(x) \ > > >> > + ((x) < 2 ? 1 : \ > > >> > + ((x) < 4 ? 2 : \ > > >> > + ((x) < 8 ? 3 : \ > > >> > + ((x) < 16 ? 4 : \ > > >> > + ((x) < 32 ? 5 : \ > > >> > + ((x) < 64 ? 6 : \ > > >> > > >> Do you need to add the tab "6 :<tab>\" > > > > > > > > > I'm not sure I understand the comment. > > > > fix your editor please! > > > > I'm using emacs with style = linux. You have a tab-character instead of a space before the last backslash on the line I specified. The "fix your editor" comment means you should find a way to visualize whitespace characters in your editor so they're obvious. > > > > > > > > > >> > > >> > > >> > + ((x) < 128 ? 7 : \ > > >> > + ((x) < 256 ? 8 : \ > > >> > + ((x) < 512 ? 9 : \ > > >> > + ((x) < 1024 ? 10 : \ > > >> > + ((x) < 2048 ? 11 : \ > > >> > + ((x) < 4096 ? 
12 : \ > > >> > + (0/0))))))))))))) > > >> > + > > >> > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > > >> > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > > >> > + "Number of segments must not exceed log of cache line > > >> > size"); > > >> > > > >> > #define ODP_BUFFER_POOL_BITS 4 > > >> > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > > >> > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > > >> > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - > > >> > ODP_BUFFER_SEG_BITS) > > >> > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + > > >> > ODP_BUFFER_INDEX_BITS) > > >> > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > > >> > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > > >> > > > >> > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > >> > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > >> > + > > >> > typedef union odp_buffer_bits_t { > > >> > uint32_t u32; > > >> > odp_buffer_t handle; > > >> > > > >> > struct { > > >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > >> > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > >> > uint32_t index:ODP_BUFFER_INDEX_BITS; > > >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > >> > +#else > > >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > >> > + uint32_t index:ODP_BUFFER_INDEX_BITS; > > >> > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > >> > +#endif > > >> > > >> and this will work on 64bit platforms? > > > > > > > > > Yes. I'm developing on a 64-bit platform. > > > > OK > > > > > > > >> > > >> > > >> > }; > > >> > -} odp_buffer_bits_t; > > >> > > > >> > + struct { > > >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > >> > +#else > > >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > >> > +#endif > > >> > + }; > > >> > +} odp_buffer_bits_t; > > >> > > > >> > /* forward declaration */ > > >> > struct odp_buffer_hdr_t; > > >> > > > >> > - > > >> > -/* > > >> > - * Scatter/gather list of buffers > > >> > - */ > > >> > -typedef struct odp_buffer_scatter_t { > > >> > - /* buffer pointers */ > > >> > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > > >> > - int num_bufs; /* num buffers */ > > >> > - int pos; /* position on the list */ > > >> > - size_t total_len; /* Total length */ > > >> > -} odp_buffer_scatter_t; > > >> > - > > >> > - > > >> > -/* > > >> > - * Chunk of buffers (in single pool) > > >> > - */ > > >> > -typedef struct odp_buffer_chunk_t { > > >> > - uint32_t num_bufs; /* num buffers */ > > >> > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > > >> > -} odp_buffer_chunk_t; > > >> > - > > >> > - > > >> > /* Common buffer header */ > > >> > typedef struct odp_buffer_hdr_t { > > >> > struct odp_buffer_hdr_t *next; /* next buf in a list */ > > >> > + int allocator; /* allocating thread id */ > > >> > odp_buffer_bits_t handle; /* handle */ > > >> > - odp_phys_addr_t phys_addr; /* physical data start > > >> > address */ > > >> > - void *addr; /* virtual data start > > address > > >> > */ > > >> > - uint32_t index; /* buf index in the pool */ > > >> > + union { > > >> > + uint32_t all; > > >> > + struct { > > >> > + uint32_t zeroized:1; /* Zeroize buf data on free > > >> > */ > > >> > + uint32_t hdrdata:1; /* Data is in buffer hdr */ > > >> > + }; > > >> > + } flags; > > >> > + int type; /* buffer type */ > > >> > size_t size; /* max data size */ > > >> > - size_t cur_offset; /* current offset */ > > 
>> > odp_atomic_u32_t ref_count; /* reference count */ > > >> > - odp_buffer_scatter_t scatter; /* Scatter/gather list */ > > >> > - int type; /* type of next header */ > > >> > odp_buffer_pool_t pool_hdl; /* buffer pool handle */ > > >> > - > > >> > + union { > > >> > + void *buf_ctx; /* user context */ > > >> > + void *udata_addr; /* user metadata addr */ > > >> > + }; > > >> > + size_t udata_size; /* size of user metadata */ > > >> > + uint32_t segcount; /* segment count */ > > >> > + uint32_t segsize; /* segment size */ > > >> > + void *addr[ODP_BUFFER_MAX_SEG]; /* block > > addrs > > >> > */ > > >> > } odp_buffer_hdr_t; > > >> > > > >> > -/* Ensure next header starts from 8 byte align */ > > >> > -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, > > >> > "ODP_BUFFER_HDR_T__SIZE_ERROR"); > > >> > +typedef struct odp_buffer_hdr_stride { > > >> > + uint8_t > > >> > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > > >> > +} odp_buffer_hdr_stride; > > >> > > > >> > +typedef struct odp_buf_blk_t { > > >> > + struct odp_buf_blk_t *next; > > >> > + struct odp_buf_blk_t *prev; > > >> > +} odp_buf_blk_t; > > >> > > > >> > /* Raw buffer header */ > > >> > typedef struct { > > >> > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > > >> > - uint8_t buf_data[]; /* start of buffer data area */ > > >> > } odp_raw_buffer_hdr_t; > > >> > > > >> > - > > >> > -/* Chunk header */ > > >> > -typedef struct odp_buffer_chunk_hdr_t { > > >> > - odp_buffer_hdr_t buf_hdr; > > >> > - odp_buffer_chunk_t chunk; > > >> > -} odp_buffer_chunk_hdr_t; > > >> > - > > >> > - > > >> > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > >> > - > > >> > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t > > >> > buf_src); > > >> > - > > >> > +/* Forward declarations */ > > >> > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > > >> > > > >> > #ifdef __cplusplus > > >> > } > > >> > diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h > > >> > b/platform/linux-generic/include/odp_buffer_pool_internal.h > > >> > index e0210bd..cd58f91 100644 > > >> > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h > > >> > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > > >> > @@ -25,6 +25,35 @@ extern "C" { > > >> > #include <odp_hints.h> > > >> > #include <odp_config.h> > > >> > #include <odp_debug.h> > > >> > +#include <odp_shared_memory.h> > > >> > +#include <odp_atomic.h> > > >> > +#include <odp_atomic_internal.h> > > >> > +#include <string.h> > > >> > + > > >> > +/** > > >> > + * Buffer initialization routine prototype > > >> > + * > > >> > + * @note Routines of this type MAY be passed as part of the > > >> > + * _odp_buffer_pool_init_t structure to be called whenever a > > >> > + * buffer is allocated to initialize the user metadata > > >> > + * associated with that buffer. > > >> > + */ > > >> > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); > > >> > + > > >> > +/** > > >> > + * Buffer pool initialization parameters > > >> > + * > > >> > + * @param[in] udata_size Size of the user metadata for each > > buffer > > >> > + * @param[in] buf_init Function pointer to be called to > > >> > initialize the > > >> > + * user metadata for each buffer in the > > pool. > > >> > + * @param[in] buf_init_arg Argument to be passed to buf_init(). 
> > >> > + * > > >> > + */ > > >> > +typedef struct _odp_buffer_pool_init_t { > > >> > + size_t udata_size; /**< Size of user metadata for each > > >> > buffer */ > > >> > + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to > > >> > use */ > > >> > + void *buf_init_arg; /**< Argument to be passed to > > >> > buf_init() */ > > >> > +} _odp_buffer_pool_init_t; /**< Type of buffer initialization > > >> > struct */ > > >> > > > >> > /* Use ticketlock instead of spinlock */ > > >> > #define POOL_USE_TICKETLOCK > > >> > @@ -39,6 +68,17 @@ extern "C" { > > >> > #include <odp_spinlock.h> > > >> > #endif > > >> > > > >> > +#ifdef POOL_USE_TICKETLOCK > > >> > +#include <odp_ticketlock.h> > > >> > +#define LOCK(a) odp_ticketlock_lock(a) > > >> > +#define UNLOCK(a) odp_ticketlock_unlock(a) > > >> > +#define LOCK_INIT(a) odp_ticketlock_init(a) > > >> > +#else > > >> > +#include <odp_spinlock.h> > > >> > +#define LOCK(a) odp_spinlock_lock(a) > > >> > +#define UNLOCK(a) odp_spinlock_unlock(a) > > >> > +#define LOCK_INIT(a) odp_spinlock_init(a) > > >> > +#endif > > >> > > > >> > struct pool_entry_s { > > >> > #ifdef POOL_USE_TICKETLOCK > > >> > @@ -47,66 +87,224 @@ struct pool_entry_s { > > >> > odp_spinlock_t lock ODP_ALIGNED_CACHE; > > >> > #endif > > >> > > > >> > - odp_buffer_chunk_hdr_t *head; > > >> > - uint64_t free_bufs; > > >> > char name[ODP_BUFFER_POOL_NAME_LEN]; > > >> > - > > >> > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > > >> > - uintptr_t buf_base; > > >> > - size_t buf_size; > > >> > - size_t buf_offset; > > >> > - uint64_t num_bufs; > > >> > - void *pool_base_addr; > > >> > - uint64_t pool_size; > > >> > - size_t user_size; > > >> > - size_t user_align; > > >> > - int buf_type; > > >> > - size_t hdr_size; > > >> > + odp_buffer_pool_param_t params; > > >> > + _odp_buffer_pool_init_t init_params; > > >> > + odp_buffer_pool_t pool_hdl; > > >> > + odp_shm_t pool_shm; > > >> > + union { > > >> > + uint32_t all; > > >> > + struct { > > >> > + uint32_t has_name:1; > > >> > + uint32_t user_supplied_shm:1; > > >> > + uint32_t unsegmented:1; > > >> > + uint32_t zeroized:1; > > >> > + uint32_t quiesced:1; > > >> > + uint32_t low_wm_assert:1; > > >> > + uint32_t predefined:1; > > >> > + }; > > >> > + } flags; > > >> > + uint8_t *pool_base_addr; > > >> > + size_t pool_size; > > >> > + uint32_t buf_stride; > > >> > + _odp_atomic_ptr_t buf_freelist; > > >> > + _odp_atomic_ptr_t blk_freelist; > > >> > + odp_atomic_u32_t bufcount; > > >> > + odp_atomic_u32_t blkcount; > > >> > + odp_atomic_u64_t bufallocs; > > >> > + odp_atomic_u64_t buffrees; > > >> > + odp_atomic_u64_t blkallocs; > > >> > + odp_atomic_u64_t blkfrees; > > >> > + odp_atomic_u64_t bufempty; > > >> > + odp_atomic_u64_t blkempty; > > >> > + odp_atomic_u64_t high_wm_count; > > >> > + odp_atomic_u64_t low_wm_count; > > >> > + size_t seg_size; > > >> > + size_t high_wm; > > >> > + size_t low_wm; > > >> > + size_t headroom; > > >> > + size_t tailroom; > > >> > > >> General comment add the same level of information into the variable > > >> names. > > >> > > >> Not consistent use "_" used to separate words in variable names. > > >> > > > > > > These are internal structs. Not relevant. > > > > so you mean that we shouldn't review internal code and > > that its OK to bi inconsistent because its internal code? > > > > I don't follow you here. You don't like the choice of variable names in > the struct? See above regarding coding style. It doesn't really matter much if the struct is for internal use only. 
*We* are the ones that have to maintain the code, collectively, thus familiarity is key. Variables are named a certain way throughout the entire code base, why have this single struct be different? > > > > > > > > > >> > > >> > > >> > > >> > }; > > >> > > > >> > +typedef union pool_entry_u { > > >> > + struct pool_entry_s s; > > >> > + > > >> > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > > >> > pool_entry_s))]; > > >> > +} pool_entry_t; > > >> > > > >> > extern void *pool_entry_ptr[]; > > >> > > > >> > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == > > 1) > > >> > +#define buffer_is_secure(buf) (buf->flags.zeroized) > > >> > +#define pool_is_secure(pool) (pool->flags.zeroized) > > >> > +#else > > >> > +#define buffer_is_secure(buf) 0 > > >> > +#define pool_is_secure(pool) 0 > > >> > +#endif > > >> > + > > >> > +#define TAG_ALIGN ((size_t)16) > > >> > > > >> > -static inline void *get_pool_entry(uint32_t pool_id) > > >> > +#define odp_cs(ptr, old, new) \ > > >> > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void > > *)new, > > >> > \ > > >> > + _ODP_MEMMODEL_SC, \ > > >> > + _ODP_MEMMODEL_SC) > > >> > + > > >> > +/* Helper functions for pointer tagging to avoid ABA race conditions > > */ > > >> > +#define odp_tag(ptr) \ > > >> > + (((size_t)ptr) & (TAG_ALIGN - 1)) > > >> > + > > >> > +#define odp_detag(ptr) \ > > >> > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > > >> > + > > >> > +#define odp_retag(ptr, tag) \ > > >> > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > > >> > + > > >> > + > > >> > +static inline void *get_blk(struct pool_entry_s *pool) > > >> > { > > >> > - return pool_entry_ptr[pool_id]; > > >> > + void *oldhead, *myhead, *newhead; > > >> > + > > >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > > >> > _ODP_MEMMODEL_ACQ); > > >> > + > > >> > + do { > > >> > + size_t tag = odp_tag(oldhead); > > >> > + myhead = odp_detag(oldhead); > > >> > + if (myhead == NULL) > > >> > + break; > > >> > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, > > tag + > > >> > 1); > > >> > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > > >> > + > > >> > + if (myhead == NULL) { > > >> > + odp_atomic_inc_u64(&pool->blkempty); > > >> > + } else { > > >> > + uint64_t blkcount = > > >> > + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); > > >> > + > > >> > + /* Check for low watermark condition */ > > >> > + if (blkcount == pool->low_wm) { > > >> > + LOCK(&pool->lock); > > >> > + if (blkcount <= pool->low_wm && > > >> > + !pool->flags.low_wm_assert) { > > >> > + pool->flags.low_wm_assert = 1; > > >> > + odp_atomic_inc_u64(&pool->low_wm_count); > > >> > + } > > >> > + UNLOCK(&pool->lock); > > >> > + } > > >> > + odp_atomic_inc_u64(&pool->blkallocs); > > >> > + } > > >> > + > > >> > + return (void *)myhead; > > >> > } > > >> > > > >> > +static inline void ret_blk(struct pool_entry_s *pool, void *block) > > >> > +{ > > >> > + void *oldhead, *myhead, *myblock; > > >> > + > > >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > > >> > _ODP_MEMMODEL_ACQ); > > >> > > > >> > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > >> > + do { > > >> > + size_t tag = odp_tag(oldhead); > > >> > + myhead = odp_detag(oldhead); > > >> > + ((odp_buf_blk_t *)block)->next = myhead; > > >> > + myblock = odp_retag(block, tag + 1); > > >> > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > > >> > + > > >> > + odp_atomic_inc_u64(&pool->blkfrees); > > >> > + uint64_t blkcount = 
odp_atomic_fetch_add_u32(&pool->blkcount, > > 1); > > >> > > >> Move uint64_t up with next to all the other globaly declared variables > > >> for this function. > > > > > > These are not global variables. > > > > Move the declaration to the top of this function next to the "void > > *oldhead,...." > > > > No thank you. Again, see comment above regarding coding style! What's the reason for disregarding the coding style of the entire project? By the way, you have started this way of declaring variables in the middle of a function in a lot of places in this commit, so please fix those as well. Cheers, Anders
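For readers following the freelist discussion above, the pointer-tagging trick being debated can be boiled down to a short standalone sketch. This is an illustration only, not the patch code: it uses C11 atomics in place of the internal _odp_atomic_ptr_* wrappers, and the names (pop_blk, blk_t, tag_of, detag, retag) are invented for the example. The idea is that pool blocks are at least TAG_ALIGN-byte aligned, so the low bits of the list head are free to carry a counter that is bumped on every update; a head that is popped and re-pushed by another thread between the load and the compare-and-swap then no longer compares equal, which defeats the classic ABA race.

#include <stdatomic.h>
#include <stddef.h>

#define TAG_ALIGN ((size_t)16)	/* blocks are at least 16-byte aligned */

typedef struct blk {
	struct blk *next;	/* stored detagged by the push side */
} blk_t;

static inline size_t tag_of(void *p)
{
	return (size_t)p & (TAG_ALIGN - 1);
}

static inline blk_t *detag(void *p)
{
	return (blk_t *)((size_t)p & ~(TAG_ALIGN - 1));
}

static inline void *retag(void *p, size_t tag)
{
	return (void *)((size_t)p | (tag & (TAG_ALIGN - 1)));
}

static void *pop_blk(_Atomic(void *) *head)
{
	void *oldhead, *newhead;
	blk_t *myhead;
	size_t tag;

	oldhead = atomic_load_explicit(head, memory_order_acquire);

	do {
		tag = tag_of(oldhead);
		myhead = detag(oldhead);
		if (myhead == NULL)
			return NULL;	/* freelist empty */
		/* Reading myhead->next is safe because pool memory is
		 * never unmapped while the pool exists; bumping the tag
		 * makes the CAS fail if the head was recycled meanwhile */
		newhead = retag(myhead->next, tag + 1);
	} while (!atomic_compare_exchange_strong(head, &oldhead, newhead));

	return myhead;
}

Note that the declarations are grouped at the top of the function, which is the style point argued back and forth above; the algorithm itself works either way.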
What problem do you see with odp_example.c with part 1 of the patch applied that disappears when you apply the remaining two parts? The code compiles and runs the same for me. I had previously mentioned that odp_example.c has a race condition (bug) that is exacerbated by the fact that the restructured code is lock-free, so the example code may fail more frequently. When that happens I just re-run it, and within an attempt or two it will usually pass. The fact that a patch exposes a latent bug in another program shouldn't be held against the patch. :) You needn't try to debug code via bisection or anything else as part of a review, since reviews do not replace normal testing. If you run into a problem, it's sufficient to note that and let the submitter investigate. My understanding is that the purpose of reviews is to spot things that normal testing may not detect. That adds real value. Agreed-to coding style is encoded in checkpatch. If checkpatch says the style is OK, that should be sufficient. If it isn't, then checkpatch should be changed to add those additional checks. There's no point in having a tool like checkpatch if each reviewer is then going to add their own subjective additions to it as part of their reviews. If you take a look at the review I posted of Bala's classification patch, that's the sort of review I think adds value to the process. We're all programmers and can read code. Whether or not I'd choose to write some code the same way as the submitter is irrelevant to my review. I might offer suggestions that may improve clarity, but throwing yellow flags just because the code isn't written the way I would have written it is not constructive. On Wed, Dec 3, 2014 at 2:35 PM, Anders Roxell <anders.roxell@linaro.org> wrote: > On 2014-12-03 07:05, Bill Fischofer wrote: > > On Wed, Dec 3, 2014 at 5:07 AM, Anders Roxell <anders.roxell@linaro.org> > > wrote: > > > > > Hi, > > > > > > This is the proposed way to break up your patch: > > > 1. break circular dependencies > > > 2. move inline functions to a new "odp_buffer_inlines.h" file. > > > 3. restructuring ODP buffer pool > > > 4. odp_buffer_pool_create > > > 5. odp_buffer_pool_destroy > > > 6. odp_buffer_pool_info > > > > > > > Seriously, what is the benefit of this sort of slicing and dicing? The > > goal here is to get the code merged rather than to figure out how to > > package it according to some esthetic ideal. Once it's merged nobody is > > going to care about any of this. Are you seriously suggesting that the > > code is unreviewable unless done this way? There are lots of follow-on > > patches that I want to do but these are being delayed trying to get off > the > > ground. This whole change is "Phase 1". Once that's in, the subsequent > > patches will be smaller and more focused, but this is an iterative > process. > > The benefit of "slicing and dicing" is to clearly and easily see what > consequences a certain change inflicts on the system. As this patch > stands now, it's pretty hard to see what consequences it has. If it was > split into the above suggestion, or a version thereof if it's not > feasible, the changes and their reasons might become obvious. > > > > > > > > Further slicing might be doable, but certain things like separating 3 > and 4 > > is not possible since the current odp_buffer_pool_create() is intimately > > tied to the current structure, which is what is being replaced. I > consider > > 1 and 2 as part of that restructure.
The only reason why > > odp_buffer_pool_destroy() and odp_buffer_pool_info() can be separated is > > because they are new APIs, but again separating them into separate > patches > > is basically wasted motion here since they are not optional pieces of the > > API, but the latest patch does that in the interest of being responsive > to > > comments. > > I disagree. This patch breaks odp_example. Perfect time for a lesson, > and here we go. > > The only thing I can say now is, the odp_example works without your > three patches, and with your first (humongous) patch it doesn't. I can't > be expected to actually inspect the entire patch to figure out what went > wrong with *your* code. > > If it was split up into logically isolated commits, I could at least say > "Hey, I bisected your commits and found that commit so and so > regressed". At the moment, all I can say is "After applying this patch > the odp_example doesn't terminate... I killed it after 2hours and 11 > minutes". This isn't something I should have to tell you though, as it > should have been tested before the patch went for review. > > F.ex. the api-change to odp_buffer_pool_create (taking the data-clump > params instead of singular arguments), would be a perfectly isolated > change that should have it's own commit. > > Fix the bug and please split up the patch into more manageable pieces. > As it stands, logical flow is intermixed with refactoring and is pretty > much > unreviewable (what do you say?). > > > > > I'd really prefer that we focus on reviewing patch contents rather than > > packaging, given that we're building ODP here, not making incremental > > changes to an established product. > > See above. > > > > > > > > see more comments inline. > > > > > > On 2 December 2014 at 22:50, Bill Fischofer <bill.fischofer@linaro.org > > > > > wrote: > > > > > > > > > > > > On Tue, Dec 2, 2014 at 3:05 PM, Anders Roxell < > anders.roxell@linaro.org> > > > > wrote: > > > >> > > > >> prefix this patch with: > > > >> api: ... > > > >> > > > >> On 2014-12-02 13:17, Bill Fischofer wrote: > > > >> > Restructure ODP buffer pool internals to support new APIs. > > > >> > > > >> The comment doesn't add any extra value from the short log. > > > >> "Modifys linux-generic, example and test to make them ready for > adding > > > the > > > >> new odp_buffer_pool_create API" > > > > > > > > > > > > The comment is descriptive of what's in the patch. > > > > > > > >> > > > >> > > > >> > Implements new odp_buffer_pool_create() API. > > > >> > > > > >> > Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org> > > > >> > --- > > > >> > example/generator/odp_generator.c | 19 +- > > > >> > example/ipsec/odp_ipsec.c | 57 +- > > > >> > example/l2fwd/odp_l2fwd.c | 19 +- > > > >> > example/odp_example/odp_example.c | 18 +- > > > >> > example/packet/odp_pktio.c | 19 +- > > > >> > example/timer/odp_timer_test.c | 13 +- > > > >> > .../linux-generic/include/api/odp_buffer_pool.h | 91 ++- > > > >> > platform/linux-generic/include/api/odp_config.h | 10 + > > > >> > .../linux-generic/include/api/odp_platform_types.h | 9 + > > > >> > > > >> Group stuff into odp_platform_types.h should be its own patch. > > > >> > > > > > > > > The change to odp_platform_types.h moves typedefs from > > > odp_shared_memory.h > > > > to break > > > > circular dependencies that would otherwise arise. As a result, this > is > > > not > > > > separable from > > > > the rest of this patch. > > > > > > don't agree. 
> > > > > > > > > > > > > > >> > > > >> > .../linux-generic/include/api/odp_shared_memory.h | 10 +- > > > >> > .../linux-generic/include/odp_buffer_inlines.h | 157 +++++ > > > >> > > > >> Creating an inline file should be its own patch. > > > > > > > > > > > > No, it's not independent of the rest of these changes. This is a > > > > restructuring patch. The rule that > > > > you've promoted is that each patch can be applied independently. > Trying > > > to > > > > make this it's own > > > > patch wouldn't follow that rule. > > > > > > Good that you are trying. > > > You are saying "ODP buffer pool restructure" in the short log, please > > > do that and *only* that in this patch then! > > > Do not add new APIs or change existing APIs, only restructure! > > > > > > > > > > >> > > > >> > > > >> > .../linux-generic/include/odp_buffer_internal.h | 137 ++-- > > > >> > .../include/odp_buffer_pool_internal.h | 278 ++++++-- > > > >> > .../linux-generic/include/odp_packet_internal.h | 50 +- > > > >> > .../linux-generic/include/odp_timer_internal.h | 11 +- > > > >> > platform/linux-generic/odp_buffer.c | 31 +- > > > >> > platform/linux-generic/odp_buffer_pool.c | 711 > > > >> > +++++++++------------ > > > >> > platform/linux-generic/odp_packet.c | 41 +- > > > >> > platform/linux-generic/odp_queue.c | 1 + > > > >> > platform/linux-generic/odp_schedule.c | 20 +- > > > >> > platform/linux-generic/odp_timer.c | 3 +- > > > >> > test/api_test/odp_timer_ping.c | 19 +- > > > >> > test/validation/odp_crypto.c | 43 +- > > > >> > test/validation/odp_queue.c | 19 +- > > > >> > 24 files changed, 1024 insertions(+), 762 deletions(-) > > > >> > create mode 100644 > > > platform/linux-generic/include/odp_buffer_inlines.h > > > >> > > > > >> > > > >> [...] > > > >> > > > >> > diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h > > > >> > b/platform/linux-generic/include/api/odp_buffer_pool.h > > > >> > index 30b83e0..7022daa 100644 > > > >> > --- a/platform/linux-generic/include/api/odp_buffer_pool.h > > > >> > +++ b/platform/linux-generic/include/api/odp_buffer_pool.h > > > >> > @@ -36,32 +36,101 @@ extern "C" { > > > >> > #define ODP_BUFFER_POOL_INVALID 0 > > > >> > > > > >> > /** > > > >> > + * Buffer pool parameters > > > >> > + * Used to communicate buffer pool creation options. > > > >> > + */ > > > >> > +typedef struct odp_buffer_pool_param_t { > > > >> > + size_t buf_size; /**< Buffer size in bytes. The maximum > > > >> > + number of bytes application will > > > >> > > > >> "...bytes the application..." > > > > > > > > > > > > The definite article is optional in english grammar here. This > level of > > > > nit-picking isn't > > > > needed. > > > > > > yes, its a nit that you can fix when you sen version 5 or whatever > > > version you will send out. > > > > > > > You misunderstood. As written it's perfectly valid standard English. > > English has more than one way of saying things. Are you saying you were > > unable to understand the comment? I appreciate that you might have > chosen > > to write it differently, but you didn't write it. > > You are the native English spoken person of us, *not* me... =) > > Excluding the above example, in general grammatical changes may be nits > and they should *not* hold up any patch that are trying to go in. > However, if one has to redo the patch and send out a new version the > nits shall be fixed as well! > > > > > > > > > > > > >> > > > >> > > > >> > + store in each buffer. */ > > > >> > + size_t buf_align; /**< Minimum buffer alignment in bytes. 
> > > >> > + Valid values are powers of two. Use > 0 > > > >> > + for default alignment. Default will > > > >> > + always be a multiple of 8. */ > > > >> > + uint32_t num_bufs; /**< Number of buffers in the pool */ > > > >> > + int buf_type; /**< Buffer type */ > > > >> > +} odp_buffer_pool_param_t; > > > >> > + > > > >> > +/** > > > >> > * Create a buffer pool > > > >> > + * This routine is used to create a buffer pool. It take three > > > >> > + * arguments: the optional name of the pool to be created, an > > > optional > > > >> > shared > > > >> > + * memory handle, and a parameter struct that describes the pool > to > > > be > > > >> > + * created. If a name is not specified the result is an anonymous > > > pool > > > >> > that > > > >> > + * cannot be referenced by odp_buffer_pool_lookup(). > > > >> > * > > > >> > - * @param name Name of the pool (max > ODP_BUFFER_POOL_NAME_LEN - > > > 1 > > > >> > chars) > > > >> > - * @param base_addr Pool base address > > > >> > - * @param size Pool size in bytes > > > >> > - * @param buf_size Buffer size in bytes > > > >> > - * @param buf_align Minimum buffer alignment > > > >> > - * @param buf_type Buffer type > > > >> > + * @param[in] name Name of the pool, max > > > ODP_BUFFER_POOL_NAME_LEN-1 > > > >> > chars. > > > >> > + * May be specified as NULL for anonymous > pools. > > > >> > * > > > >> > - * @return Buffer pool handle > > > >> > + * @param[in] shm The shared memory object in which to > create > > > the > > > >> > pool. > > > >> > + * Use ODP_SHM_NULL to reserve default memory > > > type > > > >> > + * for the buffer type. > > > >> > + * > > > >> > + * @param[in] params Buffer pool parameters. > > > >> > + * > > > >> > + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call > > > >> > failed. > > > >> > > > >> Should be > > > >> @retval Buffer pool handle on success > > > >> @retval ODP_BUFFER_POOL_INVALID if call failed 1 (if it can fail > list > > > the > > > >> reasons) > > > >> @retval ODP_BUFFER_POOL_INVALID if call failed 2 (if it can fail > list > > > the > > > >> reasons) > > > >> @retval ODP_BUFFER_POOL_INVALID if call failed N > > > > > > > > > > > > The documentation is consistent with that used in the rest of the > file. > > > If > > > > we want a doc cleanup patch > > > > that should be a separate patch and cover the whole file, not just > one > > > > routine that would otherwise stand > > > > out as an anomaly. I'll be happy to write that after this patch gets > > > > merged. > > > > > > Wasn't this a "ODP buffer pool restructure" patch, I would say that > > > this goes under restructure and or maybe it goes under a new patch > > > "api: change odp_buffer_pool_create" =) > > > > > > > As soon as this patch is merged I'll submit a patch to change all of the > > docs. I'll expect it to be approved quickly. :) > > You have more or less rewritten the entire file and that is a reason for > you to change this in a separate (and isolated!!) patch in this patch set. 
> > > > > > > > > > > > > > > >> > > > >> > > > >> > */ > > > >> > + > > > >> > odp_buffer_pool_t odp_buffer_pool_create(const char *name, > > > >> > - void *base_addr, uint64_t > size, > > > >> > - size_t buf_size, size_t > > > >> > buf_align, > > > >> > - int buf_type); > > > >> > + odp_shm_t shm, > > > >> > + odp_buffer_pool_param_t > > > *params); > > > >> > > > > >> > +/** > > > >> > + * Destroy a buffer pool previously created by > > > odp_buffer_pool_create() > > > >> > + * > > > >> > + * @param[in] pool Handle of the buffer pool to be destroyed > > > >> > + * > > > >> > + * @return 0 on Success, -1 on Failure. > > > >> > > > >> use @retval here as well and list the reasons how it can fail.] > > > > > > > > > > > > Same comment as above. > > > > > > I'm going to copy you here: > > > "Same comment as above." =) > > > > > > > > > > >> > > > >> > > > >> > + * > > > >> > + * @note This routine destroys a previously created buffer pool. > This > > > >> > call > > > >> > + * does not destroy any shared memory object passed to > > > >> > + * odp_buffer_pool_create() used to store the buffer pool > contents. > > > The > > > >> > caller > > > >> > + * takes responsibility for that. If no shared memory object was > > > passed > > > >> > as > > > >> > + * part of the create call, then this routine will destroy any > > > internal > > > >> > shared > > > >> > + * memory objects associated with the buffer pool. Results are > > > >> > undefined if > > > >> > + * an attempt is made to destroy a buffer pool that contains > > > allocated > > > >> > or > > > >> > + * otherwise active buffers. > > > >> > + */ > > > >> > +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); > > > >> > > > >> This doesn't belong in this patch, belongs in the > > > >> odp_buffer_pool_destroy patch. > > > >> > > > > > > > > That patch is for the implementation of the function, as described. > > > This is > > > > benign here. > > > > > > > >> > > > >> > > > > >> > /** > > > >> > * Find a buffer pool by name > > > >> > * > > > >> > - * @param name Name of the pool > > > >> > + * @param[in] name Name of the pool > > > >> > * > > > >> > * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not > > > found. > > > >> > > > >> Fix this. > > > > > > > > > > > > Same comments as above. > > > > > > > >> > > > >> > > > >> > + * > > > >> > + * @note This routine cannot be used to look up an anonymous pool > > > (one > > > >> > created > > > >> > + * with no name). > > > >> > > > >> How can I delete an anonymous pool? > > > > > > > > > > > > You can't. This is just implementing what's been specified. If we > want > > > to > > > > change the spec > > > > that can be addressed in a follow-on patch. > > > > > > Ok I didn't know thank you for the explanation. > > > > > > > You're welcome. > > > > > > > > > > > > > > >> > > > >> > > > >> > */ > > > >> > odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); > > > >> > > > > >> > +/** > > > >> > + * Buffer pool information struct > > > >> > + * Used to get information about a buffer pool. 
> > > >> > + */ > > > >> > +typedef struct odp_buffer_pool_info_t { > > > >> > + const char *name; /**< pool name */ > > > >> > + odp_buffer_pool_param_t params; /**< pool parameters */ > > > >> > +} odp_buffer_pool_info_t; > > > >> > + > > > >> > +/** > > > >> > + * Retrieve information about a buffer pool > > > >> > + * > > > >> > + * @param[in] pool Buffer pool handle > > > >> > + * > > > >> > + * @param[out] shm Recieves odp_shm_t supplied by caller at > > > >> > + * pool creation, or ODP_SHM_NULL if the > > > >> > + * pool is managed internally. > > > >> > + * > > > >> > + * @param[out] info Receives an odp_buffer_pool_info_t object > > > >> > + * that describes the pool. > > > >> > + * > > > >> > + * @return 0 on success, -1 if info could not be retrieved. > > > >> > > > >> Fix > > > > > > > > > > > > Same doc comments as above. > > > > > > > >> > > > >> > > > >> > + */ > > > >> > + > > > >> > +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, > > > >> > + odp_buffer_pool_info_t *info); > > > >> > > > >> This doesn't belong in this patch, belongs in the > > > >> odp_buffer_pool_info patch. > > > >> > > > >> Again, the separate patch implements these functions. These are > benign. > > > > > > > > > > > >> > > > >> > > > > >> > /** > > > >> > * Print buffer pool info > > > >> > diff --git a/platform/linux-generic/include/api/odp_config.h > > > >> > b/platform/linux-generic/include/api/odp_config.h > > > >> > index 906897c..1226d37 100644 > > > >> > --- a/platform/linux-generic/include/api/odp_config.h > > > >> > +++ b/platform/linux-generic/include/api/odp_config.h > > > >> > @@ -49,6 +49,16 @@ extern "C" { > > > >> > #define ODP_CONFIG_PKTIO_ENTRIES 64 > > > >> > > > > >> > /** > > > >> > + * Segment size to use - > > > >> > > > >> What does "-" mean? > > > >> Can you elaborate more on this? > > > > > > > > > > > > It's a stray character. > > > > > > gah, I'm sorry for beeing unclear. > > > I meant "-" remove! > > > > > > and can you elaborate more and not only say "Segment size to use". > > > > > > > > > > >> > > > >> > > > >> > + */ > > > >> > +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) > > > >> > + > > > >> > +/** > > > >> > + * Maximum buffer size supported > > > >> > + */ > > > >> > +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) > > > >> > > > >> Isn't this platform specific? > > > > > > > > > > > > Yes, and this is platform/linux-generic. I've chosen this for now > > > because > > > > the current linux-generic > > > > packet I/O doesn't support scatter/gather reads/writes. > > > > > > Bill I know this is linux-generic, I was unclear again. > > > > > > Why do you place this in odp_config.h and not in odp_platform_types.h? > > > > > > > odp_platform_types.h is for typedefs. These are implementation limits, > > like number of buffer pools we > > support, etc. It's the proper file for these sort of limits since you > can > > change variables here and get a > > different configuration of linux-generic. 
> > > > > > > > > > > > > > >> > > > >> > > > >> > + > > > >> > +/** > > > >> > * @} > > > >> > */ > > > >> > > > > >> > diff --git > a/platform/linux-generic/include/api/odp_platform_types.h > > > >> > b/platform/linux-generic/include/api/odp_platform_types.h > > > >> > index 4db47d3..b9b3aea 100644 > > > >> > --- a/platform/linux-generic/include/api/odp_platform_types.h > > > >> > +++ b/platform/linux-generic/include/api/odp_platform_types.h > > > >> > @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; > > > >> > #define ODP_PKTIO_ANY ((odp_pktio_t)~0) > > > >> > > > > >> > /** > > > >> > + * ODP shared memory block > > > >> > + */ > > > >> > +typedef uint32_t odp_shm_t; > > > >> > + > > > >> > +/** Invalid shared memory block */ > > > >> > +#define ODP_SHM_INVALID 0 > > > >> > +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer > pool use > > > >> > */ > > > >> > > > >> ODP_SHM_* touches shm functionality and should be in its own patch > to > > > >> fix/move it. > > > > > > > > > > > > Already discussed above. > > > >> > > > >> > > > >> > + > > > >> > +/** > > > >> > * @} > > > >> > */ > > > >> > > > > >> > diff --git > a/platform/linux-generic/include/api/odp_shared_memory.h > > > >> > b/platform/linux-generic/include/api/odp_shared_memory.h > > > >> > index 26e208b..f70db5a 100644 > > > >> > --- a/platform/linux-generic/include/api/odp_shared_memory.h > > > >> > +++ b/platform/linux-generic/include/api/odp_shared_memory.h > > > >> > @@ -20,6 +20,7 @@ extern "C" { > > > >> > > > > >> > > > > >> > #include <odp_std_types.h> > > > >> > +#include <odp_platform_types.h> > > > >> > > > >> Not relevant for the odp_buffer_pool_create > > > > > > > > > > > > Incorrect. It is part of the restructure for reasons discussed > above. > > > > > > OK, for restructuring but not for odp_buffer_pool_create =) > > > > > > > > > > >> > > > >> > > > >> > > > > >> > /** @defgroup odp_shared_memory ODP SHARED MEMORY > > > >> > * Operations on shared memory. > > > >> > @@ -38,15 +39,6 @@ extern "C" { > > > >> > #define ODP_SHM_PROC 0x2 /**< Share with external processes */ > > > >> > > > > >> > /** > > > >> > - * ODP shared memory block > > > >> > - */ > > > >> > -typedef uint32_t odp_shm_t; > > > >> > - > > > >> > -/** Invalid shared memory block */ > > > >> > -#define ODP_SHM_INVALID 0 > > > >> > - > > > >> > - > > > >> > -/** > > > >> > * Shared memory block info > > > >> > */ > > > >> > typedef struct odp_shm_info_t { > > > >> > diff --git a/platform/linux-generic/include/odp_buffer_inlines.h > > > >> > b/platform/linux-generic/include/odp_buffer_inlines.h > > > >> > new file mode 100644 > > > >> > index 0000000..f33b41d > > > >> > --- /dev/null > > > >> > +++ b/platform/linux-generic/include/odp_buffer_inlines.h > > > >> > @@ -0,0 +1,157 @@ > > > >> > +/* Copyright (c) 2014, Linaro Limited > > > >> > + * All rights reserved. 
> > > >> > + * > > > >> > + * SPDX-License-Identifier: BSD-3-Clause > > > >> > + */ > > > >> > + > > > >> > +/** > > > >> > + * @file > > > >> > + * > > > >> > + * Inline functions for ODP buffer mgmt routines - implementation > > > >> > internal > > > >> > + */ > > > >> > + > > > >> > +#ifndef ODP_BUFFER_INLINES_H_ > > > >> > +#define ODP_BUFFER_INLINES_H_ > > > >> > + > > > >> > +#ifdef __cplusplus > > > >> > +extern "C" { > > > >> > +#endif > > > >> > + > > > >> > +static inline odp_buffer_t > odp_buffer_encode_handle(odp_buffer_hdr_t > > > >> > *hdr) > > > >> > +{ > > > >> > + odp_buffer_bits_t handle; > > > >> > + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); > > > >> > + struct pool_entry_s *pool = get_pool_entry(pool_id); > > > >> > + > > > >> > + handle.pool_id = pool_id; > > > >> > + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / > > > >> > + ODP_CACHE_LINE_SIZE; > > > >> > + handle.seg = 0; > > > >> > + > > > >> > + return handle.u32; > > > >> > +} > > > >> > + > > > >> > +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) > > > >> > +{ > > > >> > + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); > > > >> > + if (hdl != hdr->handle.handle) { > > > >> > + ODP_DBG("buf %p should have handle %x but is cached > as > > > >> > %x\n", > > > >> > + hdr, hdl, hdr->handle.handle); > > > >> > + hdr->handle.handle = hdl; > > > >> > + } > > > >> > + return hdr->handle.handle; > > > >> > +} > > > >> > + > > > >> > +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > > >> > +{ > > > >> > + odp_buffer_bits_t handle; > > > >> > + uint32_t pool_id; > > > >> > + uint32_t index; > > > >> > + struct pool_entry_s *pool; > > > >> > + > > > >> > + handle.u32 = buf; > > > >> > + pool_id = handle.pool_id; > > > >> > + index = handle.index; > > > >> > + > > > >> > +#ifdef POOL_ERROR_CHECK > > > >> > + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { > > > >> > + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); > > > >> > + return NULL; > > > >> > + } > > > >> > +#endif > > > >> > + > > > >> > + pool = get_pool_entry(pool_id); > > > >> > + > > > >> > +#ifdef POOL_ERROR_CHECK > > > >> > + if (odp_unlikely(index > pool->params.num_bufs - 1)) { > > > >> > + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); > > > >> > + return NULL; > > > >> > + } > > > >> > +#endif > > > >> > + > > > >> > + return (odp_buffer_hdr_t *)(void *) > > > >> > + (pool->pool_base_addr + (index * > ODP_CACHE_LINE_SIZE)); > > > >> > +} > > > >> > + > > > >> > +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) > > > >> > +{ > > > >> > + return odp_atomic_load_u32(&buf->ref_count); > > > >> > +} > > > >> > + > > > >> > +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t > > > *buf, > > > >> > + uint32_t val) > > > >> > +{ > > > >> > + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; > > > >> > +} > > > >> > + > > > >> > +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t > > > *buf, > > > >> > + uint32_t val) > > > >> > +{ > > > >> > + uint32_t tmp; > > > >> > + > > > >> > + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); > > > >> > + > > > >> > + if (tmp < val) { > > > >> > + odp_atomic_fetch_add_u32(&buf->ref_count, val - > tmp); > > > >> > + return 0; > > > >> > + } else { > > > >> > > > >> drop the else statement > > > > > > > > > > > > That would be erroneous code. Refcounts don't go below 0. This code > > > > ensures that. > > > > > > Bill, I was unclear again. 
> > > I thought you understood that I meant only remove "else" and move out > > > return on tab! > > > > > > like this: > > > > > > if (tmp < val) { > > > odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); > > > return 0; > > > } > > > return tmp - val; > > > > > > > De gustibus non est disputandum. > > > > Actually, having the else makes the code clearer and less prone to error > > introduction in future updates. I'm sure you'll agree that there is no > > performance difference between the two. > > That may be, but it's established practice in our code base, and having > a de facto "coding standard" makes code *familiar* and thus easier to read. > As we have no formalized coding standard, this could probably be changed, > but that's another discussion and another patch over the entire code base. > > > > > > > > > > > > > > > > > >> > > > >> > > > >> > + return tmp - val; > > > >> > + } > > > >> > +} > > > >> > + > > > >> > +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) > > > >> > +{ > > > >> > + odp_buffer_bits_t handle; > > > >> > + odp_buffer_hdr_t *buf_hdr; > > > >> > + handle.u32 = buf; > > > >> > + > > > >> > + /* For buffer handles, segment index must be 0 */ > > > >> > > > >> Why does the buffer handle always have to have a segment index that > must > > > >> be 0? > > > > > > > > > > > > Because that's how I've defined it in this implementation. > > > > validate_buffer() can be > > > > given any 32-bit value and it will robustly say whether or not it is > a > > > valid > > > > buffer handle. > > > > > > hmm... OK, I will look again > > > > > > > > > > >> > > > >> > > > >> > + if (handle.seg != 0) > > > >> > + return NULL; > > > >> > > > >> Why do we need to check everything? > > > >> shouldn't we trust our internal stuff to be sent correctly? > > > >> Maybe it should be an ODP_ASSERT? > > > > > > > > > > > > No, odp_buffer_is_valid() does not assert. It returns a yes/no > value for > > > > any > > > > input value. > > > > > > > >> > > > >> > > > >> > + > > > >> > + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); > > > >> > + > > > >> > + /* If pool not created, handle is invalid */ > > > >> > + if (pool->s.pool_shm == ODP_SHM_INVALID) > > > >> > + return NULL; > > > >> > > > >> The same applies here. > > > > > > > > > > > > Same answer. > > > > > > > >> > > > >> > > > >> > + > > > >> > + uint32_t buf_stride = pool->s.buf_stride / > ODP_CACHE_LINE_SIZE; > > > >> > + > > > >> > + /* A valid buffer index must be on stride, and must be in > range > > > */ > > > >> > + if ((handle.index % buf_stride != 0) || > > > >> > + ((uint32_t)(handle.index / buf_stride) >= > > > >> > pool->s.params.num_bufs)) > > > >> > + return NULL; > > > >> > + > > > >> > + buf_hdr = (odp_buffer_hdr_t *)(void *) > > > >> > + (pool->s.pool_base_addr + > > > >> > + (handle.index * ODP_CACHE_LINE_SIZE)); > > > >> > + > > > >> > + /* Handle is valid, so buffer is valid if it is allocated */ > > > >> > + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > > > >> > + return NULL; > > > >> > + else > > > >> > > > >> Drop the else > > > > > > > > > > > > No, that would be erroneous. A buffer handle is no longer valid if > > > > the buffer has been freed. That's what's being checked here. > > > > > > again: > > > /* Handle is valid, so buffer is valid if it is allocated */ > > > if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) > > > return NULL; > > > return buf_hdr; > > > > > > Same comment as above. 
If I didn't think having the else here was > clearer > > I would not have written the code that way. The style passes checkpatch, > > which should be sufficient for reviewers. > > See above regarding coding style. > > > > > > > > > > > > > > > >> > > > >> > > > >> > + return buf_hdr; > > > >> > +} > > > >> > + > > > >> > +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > > >> > + > > > >> > +static inline void *buffer_map(odp_buffer_hdr_t *buf, > > > >> > + size_t offset, > > > >> > + size_t *seglen, > > > >> > + size_t limit) > > > >> > +{ > > > >> > + int seg_index = offset / buf->segsize; > > > >> > + int seg_offset = offset % buf->segsize; > > > >> > + size_t buf_left = limit - offset; > > > >> > + > > > >> > + *seglen = buf_left < buf->segsize ? > > > >> > + buf_left : buf->segsize - seg_offset; > > > >> > + > > > >> > + return (void *)(seg_offset + (uint8_t > *)buf->addr[seg_index]); > > > >> > +} > > > >> > + > > > >> > +#ifdef __cplusplus > > > >> > +} > > > >> > +#endif > > > >> > + > > > >> > +#endif > > > >> > diff --git a/platform/linux-generic/include/odp_buffer_internal.h > > > >> > b/platform/linux-generic/include/odp_buffer_internal.h > > > >> > index 0027bfc..29666db 100644 > > > >> > --- a/platform/linux-generic/include/odp_buffer_internal.h > > > >> > +++ b/platform/linux-generic/include/odp_buffer_internal.h > > > >> > @@ -24,99 +24,118 @@ extern "C" { > > > >> > #include <odp_buffer.h> > > > >> > #include <odp_debug.h> > > > >> > #include <odp_align.h> > > > >> > - > > > >> > -/* TODO: move these to correct files */ > > > >> > - > > > >> > -typedef uint64_t odp_phys_addr_t; > > > >> > - > > > >> > -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > > >> > -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > > >> > - > > > >> > -#define ODP_BUFS_PER_CHUNK 16 > > > >> > -#define ODP_BUFS_PER_SCATTER 4 > > > >> > - > > > >> > -#define ODP_BUFFER_TYPE_CHUNK 0xffff > > > >> > - > > > >> > +#include <odp_config.h> > > > >> > +#include <odp_byteorder.h> > > > >> > +#include <odp_thread.h> > > > >> > + > > > >> > + > > > >> > +#define ODP_BUFFER_MAX_SEG > > > >> > (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) > > > >> > +#define ODP_MAX_INLINE_BUF (sizeof(void *) * > (ODP_BUFFER_MAX_SEG > > > - > > > >> > 1)) > > > >> > + > > > >> > +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % > ODP_CACHE_LINE_SIZE) == > > > 0, > > > >> > + "ODP Segment size must be a multiple of cache line > > > >> > size"); > > > >> > + > > > >> > +#define ODP_SEGBITS(x) \ > > > >> > + ((x) < 2 ? 1 : \ > > > >> > + ((x) < 4 ? 2 : \ > > > >> > + ((x) < 8 ? 3 : \ > > > >> > + ((x) < 16 ? 4 : \ > > > >> > + ((x) < 32 ? 5 : \ > > > >> > + ((x) < 64 ? 6 : \ > > > >> > > > >> Do you need to add the tab "6 :<tab>\" > > > > > > > > > > > > I'm not sure I understand the comment. > > > > > > fix your editor please! > > > > > > > I'm using emacs with style = linux. > > You have a tab-character instead of a space before the last backslash on > the line I specified. > > The "fix your editor" comment means you should find a way to visualize > whitespace characters in your editor so they're obvious. > > > > > > > > > > > > > > > >> > > > >> > > > >> > + ((x) < 128 ? 7 : \ > > > >> > + ((x) < 256 ? 8 : \ > > > >> > + ((x) < 512 ? 9 : \ > > > >> > + ((x) < 1024 ? 10 : \ > > > >> > + ((x) < 2048 ? 11 : \ > > > >> > + ((x) < 4096 ? 
12 : \ > > > >> > + (0/0))))))))))))) > > > >> > + > > > >> > +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < > > > >> > + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), > > > >> > + "Number of segments must not exceed log of cache > line > > > >> > size"); > > > >> > > > > >> > #define ODP_BUFFER_POOL_BITS 4 > > > >> > -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) > > > >> > +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) > > > >> > +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - > > > >> > ODP_BUFFER_SEG_BITS) > > > >> > +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + > > > >> > ODP_BUFFER_INDEX_BITS) > > > >> > #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) > > > >> > #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) > > > >> > > > > >> > +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) > > > >> > +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) > > > >> > + > > > >> > typedef union odp_buffer_bits_t { > > > >> > uint32_t u32; > > > >> > odp_buffer_t handle; > > > >> > > > > >> > struct { > > > >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > > >> > uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > > >> > uint32_t index:ODP_BUFFER_INDEX_BITS; > > > >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > > >> > +#else > > > >> > + uint32_t seg:ODP_BUFFER_SEG_BITS; > > > >> > + uint32_t index:ODP_BUFFER_INDEX_BITS; > > > >> > + uint32_t pool_id:ODP_BUFFER_POOL_BITS; > > > >> > +#endif > > > >> > > > >> and this will work on 64bit platforms? > > > > > > > > > > > > Yes. I'm developing on a 64-bit platform. > > > > > > OK > > > > > > > > > > >> > > > >> > > > >> > }; > > > >> > -} odp_buffer_bits_t; > > > >> > > > > >> > + struct { > > > >> > +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN > > > >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > > >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > > >> > +#else > > > >> > + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; > > > >> > + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; > > > >> > +#endif > > > >> > + }; > > > >> > +} odp_buffer_bits_t; > > > >> > > > > >> > /* forward declaration */ > > > >> > struct odp_buffer_hdr_t; > > > >> > > > > >> > - > > > >> > -/* > > > >> > - * Scatter/gather list of buffers > > > >> > - */ > > > >> > -typedef struct odp_buffer_scatter_t { > > > >> > - /* buffer pointers */ > > > >> > - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; > > > >> > - int num_bufs; /* num buffers */ > > > >> > - int pos; /* position on the > list */ > > > >> > - size_t total_len; /* Total length */ > > > >> > -} odp_buffer_scatter_t; > > > >> > - > > > >> > - > > > >> > -/* > > > >> > - * Chunk of buffers (in single pool) > > > >> > - */ > > > >> > -typedef struct odp_buffer_chunk_t { > > > >> > - uint32_t num_bufs; /* num buffers */ > > > >> > - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ > > > >> > -} odp_buffer_chunk_t; > > > >> > - > > > >> > - > > > >> > /* Common buffer header */ > > > >> > typedef struct odp_buffer_hdr_t { > > > >> > struct odp_buffer_hdr_t *next; /* next buf in a list > */ > > > >> > + int allocator; /* allocating thread > id */ > > > >> > odp_buffer_bits_t handle; /* handle */ > > > >> > - odp_phys_addr_t phys_addr; /* physical data start > > > >> > address */ > > > >> > - void *addr; /* virtual data start > > > address > > > >> > */ > > > >> > - uint32_t index; /* buf index in the > pool */ > > > >> > + union { > > > >> > + uint32_t all; > > > >> > + struct { > > > >> > + uint32_t zeroized:1; /* Zeroize buf data on > free > > > >> > */ > > > >> > + 
uint32_t hdrdata:1; /* Data is in buffer > hdr */ > > > >> > + }; > > > >> > + } flags; > > > >> > + int type; /* buffer type */ > > > >> > size_t size; /* max data size */ > > > >> > - size_t cur_offset; /* current offset */ > > > >> > odp_atomic_u32_t ref_count; /* reference count */ > > > >> > - odp_buffer_scatter_t scatter; /* Scatter/gather list > */ > > > >> > - int type; /* type of next header > */ > > > >> > odp_buffer_pool_t pool_hdl; /* buffer pool handle > */ > > > >> > - > > > >> > + union { > > > >> > + void *buf_ctx; /* user context */ > > > >> > + void *udata_addr; /* user metadata addr > */ > > > >> > + }; > > > >> > + size_t udata_size; /* size of user > metadata */ > > > >> > + uint32_t segcount; /* segment count */ > > > >> > + uint32_t segsize; /* segment size */ > > > >> > + void *addr[ODP_BUFFER_MAX_SEG]; /* block > > > addrs > > > >> > */ > > > >> > } odp_buffer_hdr_t; > > > >> > > > > >> > -/* Ensure next header starts from 8 byte align */ > > > >> > -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, > > > >> > "ODP_BUFFER_HDR_T__SIZE_ERROR"); > > > >> > +typedef struct odp_buffer_hdr_stride { > > > >> > + uint8_t > > > >> > pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; > > > >> > +} odp_buffer_hdr_stride; > > > >> > > > > >> > +typedef struct odp_buf_blk_t { > > > >> > + struct odp_buf_blk_t *next; > > > >> > + struct odp_buf_blk_t *prev; > > > >> > +} odp_buf_blk_t; > > > >> > > > > >> > /* Raw buffer header */ > > > >> > typedef struct { > > > >> > odp_buffer_hdr_t buf_hdr; /* common buffer header */ > > > >> > - uint8_t buf_data[]; /* start of buffer data area */ > > > >> > } odp_raw_buffer_hdr_t; > > > >> > > > > >> > - > > > >> > -/* Chunk header */ > > > >> > -typedef struct odp_buffer_chunk_hdr_t { > > > >> > - odp_buffer_hdr_t buf_hdr; > > > >> > - odp_buffer_chunk_t chunk; > > > >> > -} odp_buffer_chunk_hdr_t; > > > >> > - > > > >> > - > > > >> > -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); > > > >> > - > > > >> > -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t > > > >> > buf_src); > > > >> > - > > > >> > +/* Forward declarations */ > > > >> > +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); > > > >> > > > > >> > #ifdef __cplusplus > > > >> > } > > > >> > diff --git > a/platform/linux-generic/include/odp_buffer_pool_internal.h > > > >> > b/platform/linux-generic/include/odp_buffer_pool_internal.h > > > >> > index e0210bd..cd58f91 100644 > > > >> > --- a/platform/linux-generic/include/odp_buffer_pool_internal.h > > > >> > +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h > > > >> > @@ -25,6 +25,35 @@ extern "C" { > > > >> > #include <odp_hints.h> > > > >> > #include <odp_config.h> > > > >> > #include <odp_debug.h> > > > >> > +#include <odp_shared_memory.h> > > > >> > +#include <odp_atomic.h> > > > >> > +#include <odp_atomic_internal.h> > > > >> > +#include <string.h> > > > >> > + > > > >> > +/** > > > >> > + * Buffer initialization routine prototype > > > >> > + * > > > >> > + * @note Routines of this type MAY be passed as part of the > > > >> > + * _odp_buffer_pool_init_t structure to be called whenever a > > > >> > + * buffer is allocated to initialize the user metadata > > > >> > + * associated with that buffer. 
> > > >> > + */ > > > >> > +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void > *buf_init_arg); > > > >> > + > > > >> > +/** > > > >> > + * Buffer pool initialization parameters > > > >> > + * > > > >> > + * @param[in] udata_size Size of the user metadata for each > > > buffer > > > >> > + * @param[in] buf_init Function pointer to be called to > > > >> > initialize the > > > >> > + * user metadata for each buffer in the > > > pool. > > > >> > + * @param[in] buf_init_arg Argument to be passed to buf_init(). > > > >> > + * > > > >> > + */ > > > >> > +typedef struct _odp_buffer_pool_init_t { > > > >> > + size_t udata_size; /**< Size of user metadata for > each > > > >> > buffer */ > > > >> > + _odp_buf_init_t *buf_init; /**< Buffer initialization > routine to > > > >> > use */ > > > >> > + void *buf_init_arg; /**< Argument to be passed to > > > >> > buf_init() */ > > > >> > +} _odp_buffer_pool_init_t; /**< Type of buffer > initialization > > > >> > struct */ > > > >> > > > > >> > /* Use ticketlock instead of spinlock */ > > > >> > #define POOL_USE_TICKETLOCK > > > >> > @@ -39,6 +68,17 @@ extern "C" { > > > >> > #include <odp_spinlock.h> > > > >> > #endif > > > >> > > > > >> > +#ifdef POOL_USE_TICKETLOCK > > > >> > +#include <odp_ticketlock.h> > > > >> > +#define LOCK(a) odp_ticketlock_lock(a) > > > >> > +#define UNLOCK(a) odp_ticketlock_unlock(a) > > > >> > +#define LOCK_INIT(a) odp_ticketlock_init(a) > > > >> > +#else > > > >> > +#include <odp_spinlock.h> > > > >> > +#define LOCK(a) odp_spinlock_lock(a) > > > >> > +#define UNLOCK(a) odp_spinlock_unlock(a) > > > >> > +#define LOCK_INIT(a) odp_spinlock_init(a) > > > >> > +#endif > > > >> > > > > >> > struct pool_entry_s { > > > >> > #ifdef POOL_USE_TICKETLOCK > > > >> > @@ -47,66 +87,224 @@ struct pool_entry_s { > > > >> > odp_spinlock_t lock ODP_ALIGNED_CACHE; > > > >> > #endif > > > >> > > > > >> > - odp_buffer_chunk_hdr_t *head; > > > >> > - uint64_t free_bufs; > > > >> > char name[ODP_BUFFER_POOL_NAME_LEN]; > > > >> > - > > > >> > - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; > > > >> > - uintptr_t buf_base; > > > >> > - size_t buf_size; > > > >> > - size_t buf_offset; > > > >> > - uint64_t num_bufs; > > > >> > - void *pool_base_addr; > > > >> > - uint64_t pool_size; > > > >> > - size_t user_size; > > > >> > - size_t user_align; > > > >> > - int buf_type; > > > >> > - size_t hdr_size; > > > >> > + odp_buffer_pool_param_t params; > > > >> > + _odp_buffer_pool_init_t init_params; > > > >> > + odp_buffer_pool_t pool_hdl; > > > >> > + odp_shm_t pool_shm; > > > >> > + union { > > > >> > + uint32_t all; > > > >> > + struct { > > > >> > + uint32_t has_name:1; > > > >> > + uint32_t user_supplied_shm:1; > > > >> > + uint32_t unsegmented:1; > > > >> > + uint32_t zeroized:1; > > > >> > + uint32_t quiesced:1; > > > >> > + uint32_t low_wm_assert:1; > > > >> > + uint32_t predefined:1; > > > >> > + }; > > > >> > + } flags; > > > >> > + uint8_t *pool_base_addr; > > > >> > + size_t pool_size; > > > >> > + uint32_t buf_stride; > > > >> > + _odp_atomic_ptr_t buf_freelist; > > > >> > + _odp_atomic_ptr_t blk_freelist; > > > >> > + odp_atomic_u32_t bufcount; > > > >> > + odp_atomic_u32_t blkcount; > > > >> > + odp_atomic_u64_t bufallocs; > > > >> > + odp_atomic_u64_t buffrees; > > > >> > + odp_atomic_u64_t blkallocs; > > > >> > + odp_atomic_u64_t blkfrees; > > > >> > + odp_atomic_u64_t bufempty; > > > >> > + odp_atomic_u64_t blkempty; > > > >> > + odp_atomic_u64_t high_wm_count; > > > >> > + odp_atomic_u64_t low_wm_count; > > > >> > + size_t seg_size; > > 
> >> > + size_t high_wm; > > > >> > + size_t low_wm; > > > >> > + size_t headroom; > > > >> > + size_t tailroom; > > > >> > > > >> General comment add the same level of information into the variable > > > >> names. > > > >> > > > >> Not consistent use "_" used to separate words in variable names. > > > >> > > > > > > > > These are internal structs. Not relevant. > > > > > > so you mean that we shouldn't review internal code and > > > that its OK to bi inconsistent because its internal code? > > > > > > > I don't follow you here. You don't like the choice of variable names in > > the struct? > > See above regarding coding style. It doesn't really matter much if the > struct is for internal use only. *We* are the ones that have to maintain > the code, collectively, thus familiarity is key. Variables are named a > certain way throughout the entire code base, why have this single struct > be different? > > > > > > > > > > > > > > > >> > > > >> > > > >> > > > >> > }; > > > >> > > > > >> > +typedef union pool_entry_u { > > > >> > + struct pool_entry_s s; > > > >> > + > > > >> > + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct > > > >> > pool_entry_s))]; > > > >> > +} pool_entry_t; > > > >> > > > > >> > extern void *pool_entry_ptr[]; > > > >> > > > > >> > +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS > == > > > 1) > > > >> > +#define buffer_is_secure(buf) (buf->flags.zeroized) > > > >> > +#define pool_is_secure(pool) (pool->flags.zeroized) > > > >> > +#else > > > >> > +#define buffer_is_secure(buf) 0 > > > >> > +#define pool_is_secure(pool) 0 > > > >> > +#endif > > > >> > + > > > >> > +#define TAG_ALIGN ((size_t)16) > > > >> > > > > >> > -static inline void *get_pool_entry(uint32_t pool_id) > > > >> > +#define odp_cs(ptr, old, new) \ > > > >> > + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void > > > *)new, > > > >> > \ > > > >> > + _ODP_MEMMODEL_SC, \ > > > >> > + _ODP_MEMMODEL_SC) > > > >> > + > > > >> > +/* Helper functions for pointer tagging to avoid ABA race > conditions > > > */ > > > >> > +#define odp_tag(ptr) \ > > > >> > + (((size_t)ptr) & (TAG_ALIGN - 1)) > > > >> > + > > > >> > +#define odp_detag(ptr) \ > > > >> > + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) > > > >> > + > > > >> > +#define odp_retag(ptr, tag) \ > > > >> > + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) > > > >> > + > > > >> > + > > > >> > +static inline void *get_blk(struct pool_entry_s *pool) > > > >> > { > > > >> > - return pool_entry_ptr[pool_id]; > > > >> > + void *oldhead, *myhead, *newhead; > > > >> > + > > > >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > > > >> > _ODP_MEMMODEL_ACQ); > > > >> > + > > > >> > + do { > > > >> > + size_t tag = odp_tag(oldhead); > > > >> > + myhead = odp_detag(oldhead); > > > >> > + if (myhead == NULL) > > > >> > + break; > > > >> > + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, > > > tag + > > > >> > 1); > > > >> > + } while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0); > > > >> > + > > > >> > + if (myhead == NULL) { > > > >> > + odp_atomic_inc_u64(&pool->blkempty); > > > >> > + } else { > > > >> > + uint64_t blkcount = > > > >> > + odp_atomic_fetch_sub_u32(&pool->blkcount, > 1); > > > >> > + > > > >> > + /* Check for low watermark condition */ > > > >> > + if (blkcount == pool->low_wm) { > > > >> > + LOCK(&pool->lock); > > > >> > + if (blkcount <= pool->low_wm && > > > >> > + !pool->flags.low_wm_assert) { > > > >> > + pool->flags.low_wm_assert = 1; > > > >> > + > odp_atomic_inc_u64(&pool->low_wm_count); > > > >> > 
+ } > > > >> > + UNLOCK(&pool->lock); > > > >> > + } > > > >> > + odp_atomic_inc_u64(&pool->blkallocs); > > > >> > + } > > > >> > + > > > >> > + return (void *)myhead; > > > >> > } > > > >> > > > > >> > +static inline void ret_blk(struct pool_entry_s *pool, void > *block) > > > >> > +{ > > > >> > + void *oldhead, *myhead, *myblock; > > > >> > + > > > >> > + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, > > > >> > _ODP_MEMMODEL_ACQ); > > > >> > > > > >> > -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) > > > >> > + do { > > > >> > + size_t tag = odp_tag(oldhead); > > > >> > + myhead = odp_detag(oldhead); > > > >> > + ((odp_buf_blk_t *)block)->next = myhead; > > > >> > + myblock = odp_retag(block, tag + 1); > > > >> > + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); > > > >> > + > > > >> > + odp_atomic_inc_u64(&pool->blkfrees); > > > >> > + uint64_t blkcount = > odp_atomic_fetch_add_u32(&pool->blkcount, > > > 1); > > > >> > > > >> Move uint64_t up with next to all the other globaly declared > variables > > > >> for this function. > > > > > > > > > > > > These are not global variables. > > > > > > Move the declaration to the top of this function next to the "void > > > *oldhead,...." > > > > > > > No thank you. > > Again, see comment above regarding coding style! What's the reason for > disregarding the coding style of the entire project? > > By the way you have started this way of declare variables in the middle > of a function in a lot of places in this commit, so please fix those as > well. > > Cheers, > Anders >
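The patch follows. Before reading the hunks, it may help to see the caller-side conversion they all repeat, condensed here into one before/after sketch (the constants are the ones the generator example already uses; error handling is abbreviated):

/* Before: the caller reserves shared memory itself and hands the raw
 * base address plus four loose arguments to the create call */
odp_shm_t shm = odp_shm_reserve("shm_packet_pool", SHM_PKT_POOL_SIZE,
				ODP_CACHE_LINE_SIZE, 0);

pool = odp_buffer_pool_create("packet_pool", odp_shm_addr(shm),
			      SHM_PKT_POOL_SIZE, SHM_PKT_POOL_BUF_SIZE,
			      ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_PACKET);

/* After: the loose arguments move into odp_buffer_pool_param_t, and
 * passing ODP_SHM_NULL asks the implementation to manage the memory */
odp_buffer_pool_param_t params;

params.buf_size  = SHM_PKT_POOL_BUF_SIZE;
params.buf_align = 0;	/* 0 selects the default alignment */
params.num_bufs  = SHM_PKT_POOL_SIZE / SHM_PKT_POOL_BUF_SIZE;
params.buf_type  = ODP_BUFFER_TYPE_PACKET;

pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
if (pool == ODP_BUFFER_POOL_INVALID)
	EXAMPLE_ERR("Error: packet pool create failed.\n");

The timer example below is the one caller that keeps its own odp_shm_reserve() and passes the resulting shm handle instead of ODP_SHM_NULL, so the diff shows both modes of the new API.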
diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c index 73b0369..476cbef 100644 --- a/example/generator/odp_generator.c +++ b/example/generator/odp_generator.c @@ -522,11 +522,11 @@ int main(int argc, char *argv[]) odph_linux_pthread_t thread_tbl[MAX_WORKERS]; odp_buffer_pool_t pool; int num_workers; - void *pool_base; int i; int first_core; int core_count; odp_shm_t shm; + odp_buffer_pool_param_t params; /* Init ODP before calling anything else */ if (odp_init_global(NULL, NULL)) { @@ -589,20 +589,13 @@ int main(int argc, char *argv[]) printf("First core: %i\n\n", first_core); /* Create packet pool */ - shm = odp_shm_reserve("shm_packet_pool", - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - pool_base = odp_shm_addr(shm); + params.buf_size = SHM_PKT_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; + params.buf_type = ODP_BUFFER_TYPE_PACKET; - if (pool_base == NULL) { - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); - exit(EXIT_FAILURE); - } + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, ¶ms); - pool = odp_buffer_pool_create("packet_pool", pool_base, - SHM_PKT_POOL_SIZE, - SHM_PKT_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_PACKET); if (pool == ODP_BUFFER_POOL_INVALID) { EXAMPLE_ERR("Error: packet pool create failed.\n"); exit(EXIT_FAILURE); diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c index 76d27c5..f96338c 100644 --- a/example/ipsec/odp_ipsec.c +++ b/example/ipsec/odp_ipsec.c @@ -367,8 +367,7 @@ static void ipsec_init_pre(void) { odp_queue_param_t qparam; - void *pool_base; - odp_shm_t shm; + odp_buffer_pool_param_t params; /* * Create queues @@ -401,16 +400,12 @@ void ipsec_init_pre(void) } /* Create output buffer pool */ - shm = odp_shm_reserve("shm_out_pool", - SHM_OUT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - - pool_base = odp_shm_addr(shm); + params.buf_size = SHM_OUT_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; + params.buf_type = ODP_BUFFER_TYPE_PACKET; - out_pool = odp_buffer_pool_create("out_pool", pool_base, - SHM_OUT_POOL_SIZE, - SHM_OUT_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_PACKET); + out_pool = odp_buffer_pool_create("out_pool", ODP_SHM_NULL, ¶ms); if (ODP_BUFFER_POOL_INVALID == out_pool) { EXAMPLE_ERR("Error: message pool create failed.\n"); @@ -1176,12 +1171,12 @@ main(int argc, char *argv[]) { odph_linux_pthread_t thread_tbl[MAX_WORKERS]; int num_workers; - void *pool_base; int i; int first_core; int core_count; int stream_count; odp_shm_t shm; + odp_buffer_pool_param_t params; /* Init ODP before calling anything else */ if (odp_init_global(NULL, NULL)) { @@ -1241,42 +1236,28 @@ main(int argc, char *argv[]) printf("First core: %i\n\n", first_core); /* Create packet buffer pool */ - shm = odp_shm_reserve("shm_packet_pool", - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); + params.buf_size = SHM_PKT_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_PKT_POOL_BUF_COUNT; + params.buf_type = ODP_BUFFER_TYPE_PACKET; - pool_base = odp_shm_addr(shm); - - if (NULL == pool_base) { - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); - exit(EXIT_FAILURE); - } + pkt_pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, + ¶ms); - pkt_pool = odp_buffer_pool_create("packet_pool", pool_base, - SHM_PKT_POOL_SIZE, - SHM_PKT_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_PACKET); if (ODP_BUFFER_POOL_INVALID == pkt_pool) { EXAMPLE_ERR("Error: packet pool create failed.\n"); 
exit(EXIT_FAILURE); } /* Create context buffer pool */ - shm = odp_shm_reserve("shm_ctx_pool", - SHM_CTX_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - - pool_base = odp_shm_addr(shm); + params.buf_size = SHM_CTX_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_CTX_POOL_BUF_COUNT; + params.buf_type = ODP_BUFFER_TYPE_RAW; - if (NULL == pool_base) { - EXAMPLE_ERR("Error: context pool mem alloc failed.\n"); - exit(EXIT_FAILURE); - } + ctx_pool = odp_buffer_pool_create("ctx_pool", ODP_SHM_NULL, + &params); - ctx_pool = odp_buffer_pool_create("ctx_pool", pool_base, - SHM_CTX_POOL_SIZE, - SHM_CTX_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_RAW); if (ODP_BUFFER_POOL_INVALID == ctx_pool) { EXAMPLE_ERR("Error: context pool create failed.\n"); exit(EXIT_FAILURE); diff --git a/example/l2fwd/odp_l2fwd.c b/example/l2fwd/odp_l2fwd.c index ebac8c5..3c1fd6a 100644 --- a/example/l2fwd/odp_l2fwd.c +++ b/example/l2fwd/odp_l2fwd.c @@ -314,12 +314,12 @@ int main(int argc, char *argv[]) { odph_linux_pthread_t thread_tbl[MAX_WORKERS]; odp_buffer_pool_t pool; - void *pool_base; int i; int first_core; int core_count; odp_pktio_t pktio; odp_shm_t shm; + odp_buffer_pool_param_t params; /* Init ODP before calling anything else */ if (odp_init_global(NULL, NULL)) { @@ -383,20 +383,13 @@ int main(int argc, char *argv[]) printf("First core: %i\n\n", first_core); /* Create packet pool */ - shm = odp_shm_reserve("shm_packet_pool", - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - pool_base = odp_shm_addr(shm); + params.buf_size = SHM_PKT_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; + params.buf_type = ODP_BUFFER_TYPE_PACKET; - if (pool_base == NULL) { - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); - exit(EXIT_FAILURE); - } + pool = odp_buffer_pool_create("packet pool", ODP_SHM_NULL, &params); - pool = odp_buffer_pool_create("packet_pool", pool_base, - SHM_PKT_POOL_SIZE, - SHM_PKT_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_PACKET); if (pool == ODP_BUFFER_POOL_INVALID) { EXAMPLE_ERR("Error: packet pool create failed.\n"); exit(EXIT_FAILURE); diff --git a/example/odp_example/odp_example.c b/example/odp_example/odp_example.c index 96a2912..8373f12 100644 --- a/example/odp_example/odp_example.c +++ b/example/odp_example/odp_example.c @@ -954,13 +954,13 @@ int main(int argc, char *argv[]) test_args_t args; int num_workers; odp_buffer_pool_t pool; - void *pool_base; odp_queue_t queue; int i, j; int prios; int first_core; odp_shm_t shm; test_globals_t *globals; + odp_buffer_pool_param_t params; printf("\nODP example starts\n\n"); @@ -1042,19 +1042,13 @@ int main(int argc, char *argv[]) /* * Create message pool */ - shm = odp_shm_reserve("msg_pool", - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - pool_base = odp_shm_addr(shm); + params.buf_size = sizeof(test_message_t); + params.buf_align = 0; + params.num_bufs = MSG_POOL_SIZE/sizeof(test_message_t); + params.buf_type = ODP_BUFFER_TYPE_RAW; - if (pool_base == NULL) { - EXAMPLE_ERR("Shared memory reserve failed.\n"); - return -1; - } - - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, - sizeof(test_message_t), - ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW); + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, &params); if (pool == ODP_BUFFER_POOL_INVALID) { EXAMPLE_ERR("Pool create failed.\n"); diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c index 1763c84..27318d4 100644 --- a/example/packet/odp_pktio.c +++ b/example/packet/odp_pktio.c @@ -331,11 +331,11 @@ int
main(int argc, char *argv[]) odph_linux_pthread_t thread_tbl[MAX_WORKERS]; odp_buffer_pool_t pool; int num_workers; - void *pool_base; int i; int first_core; int core_count; odp_shm_t shm; + odp_buffer_pool_param_t params; /* Init ODP before calling anything else */ if (odp_init_global(NULL, NULL)) { @@ -389,20 +389,13 @@ int main(int argc, char *argv[]) printf("First core: %i\n\n", first_core); /* Create packet pool */ - shm = odp_shm_reserve("shm_packet_pool", - SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - pool_base = odp_shm_addr(shm); + params.buf_size = SHM_PKT_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; + params.buf_type = ODP_BUFFER_TYPE_PACKET; - if (pool_base == NULL) { - EXAMPLE_ERR("Error: packet pool mem alloc failed.\n"); - exit(EXIT_FAILURE); - } + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params); - pool = odp_buffer_pool_create("packet_pool", pool_base, - SHM_PKT_POOL_SIZE, - SHM_PKT_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_PACKET); if (pool == ODP_BUFFER_POOL_INVALID) { EXAMPLE_ERR("Error: packet pool create failed.\n"); exit(EXIT_FAILURE); diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c index 9968bfe..0d6e31a 100644 --- a/example/timer/odp_timer_test.c +++ b/example/timer/odp_timer_test.c @@ -244,12 +244,12 @@ int main(int argc, char *argv[]) test_args_t args; int num_workers; odp_buffer_pool_t pool; - void *pool_base; odp_queue_t queue; int first_core; uint64_t cycles, ns; odp_queue_param_t param; odp_shm_t shm; + odp_buffer_pool_param_t params; printf("\nODP timer example starts\n"); @@ -313,12 +313,13 @@ int main(int argc, char *argv[]) */ shm = odp_shm_reserve("msg_pool", MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - pool_base = odp_shm_addr(shm); - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, - 0, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_TIMEOUT); + params.buf_size = 0; + params.buf_align = 0; + params.num_bufs = MSG_POOL_SIZE; + params.buf_type = ODP_BUFFER_TYPE_TIMEOUT; + + pool = odp_buffer_pool_create("msg_pool", shm, &params); if (pool == ODP_BUFFER_POOL_INVALID) { EXAMPLE_ERR("Pool create failed.\n"); diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h b/platform/linux-generic/include/api/odp_buffer_pool.h index 30b83e0..7022daa 100644 --- a/platform/linux-generic/include/api/odp_buffer_pool.h +++ b/platform/linux-generic/include/api/odp_buffer_pool.h @@ -36,32 +36,101 @@ extern "C" { #define ODP_BUFFER_POOL_INVALID 0 /** + * Buffer pool parameters + * Used to communicate buffer pool creation options. + */ +typedef struct odp_buffer_pool_param_t { + size_t buf_size; /**< Buffer size in bytes. The maximum + number of bytes the application will + store in each buffer. */ + size_t buf_align; /**< Minimum buffer alignment in bytes. + Valid values are powers of two. Use 0 + for default alignment. Default will + always be a multiple of 8. */ + uint32_t num_bufs; /**< Number of buffers in the pool */ + int buf_type; /**< Buffer type */ +} odp_buffer_pool_param_t; + +/** * Create a buffer pool + * This routine is used to create a buffer pool. It takes three + * arguments: the optional name of the pool to be created, an optional shared + * memory handle, and a parameter struct that describes the pool to be + * created. If a name is not specified the result is an anonymous pool that + * cannot be referenced by odp_buffer_pool_lookup().
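A short sketch of the two creation modes this paragraph describes, using the new signature; the names and sizes here are illustrative only, and error handling is elided:

static void create_two_pools(void)
{
	odp_buffer_pool_param_t params;
	odp_buffer_pool_t anon, named;
	odp_shm_t shm;

	params.buf_size  = 1856; /* illustrative */
	params.buf_align = 0;
	params.num_bufs  = 1024;
	params.buf_type  = ODP_BUFFER_TYPE_PACKET;

	/* Anonymous pool: backing store reserved internally by ODP;
	 * odp_buffer_pool_lookup() will never find this pool. */
	anon = odp_buffer_pool_create(NULL, ODP_SHM_NULL, &params);

	/* Named pool in caller-supplied shared memory; the caller
	 * remains responsible for the shm object's lifetime. */
	shm = odp_shm_reserve("my_pool_shm", 4 * 1024 * 1024,
			      ODP_PAGE_SIZE, 0);
	named = odp_buffer_pool_create("my_pool", shm, &params);

	(void)anon;
	(void)named; /* sketch only */
}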
* - * @param name Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 chars) - * @param base_addr Pool base address - * @param size Pool size in bytes - * @param buf_size Buffer size in bytes - * @param buf_align Minimum buffer alignment - * @param buf_type Buffer type + * @param[in] name Name of the pool, max ODP_BUFFER_POOL_NAME_LEN-1 chars. + * May be specified as NULL for anonymous pools. * - * @return Buffer pool handle + * @param[in] shm The shared memory object in which to create the pool. + * Use ODP_SHM_NULL to reserve default memory type + * for the buffer type. + * + * @param[in] params Buffer pool parameters. + * + * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call failed. */ + odp_buffer_pool_t odp_buffer_pool_create(const char *name, - void *base_addr, uint64_t size, - size_t buf_size, size_t buf_align, - int buf_type); + odp_shm_t shm, + odp_buffer_pool_param_t *params); +/** + * Destroy a buffer pool previously created by odp_buffer_pool_create() + * + * @param[in] pool Handle of the buffer pool to be destroyed + * + * @return 0 on success, -1 on failure. + * + * @note This routine destroys a previously created buffer pool. This call + * does not destroy any shared memory object passed to + * odp_buffer_pool_create() used to store the buffer pool contents. The caller + * takes responsibility for that. If no shared memory object was passed as + * part of the create call, then this routine will destroy any internal shared + * memory objects associated with the buffer pool. Results are undefined if + * an attempt is made to destroy a buffer pool that contains allocated or + * otherwise active buffers. + */ +int odp_buffer_pool_destroy(odp_buffer_pool_t pool); /** * Find a buffer pool by name * - * @param name Name of the pool + * @param[in] name Name of the pool * * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found. + * + * @note This routine cannot be used to look up an anonymous pool (one created + * with no name). */ odp_buffer_pool_t odp_buffer_pool_lookup(const char *name); +/** + * Buffer pool information struct + * Used to get information about a buffer pool. + */ +typedef struct odp_buffer_pool_info_t { + const char *name; /**< pool name */ + odp_buffer_pool_param_t params; /**< pool parameters */ +} odp_buffer_pool_info_t; + +/** + * Retrieve information about a buffer pool + * + * @param[in] pool Buffer pool handle + * + * @param[out] shm Receives odp_shm_t supplied by caller at + * pool creation, or ODP_SHM_NULL if the + * pool is managed internally. + * + * @param[out] info Receives an odp_buffer_pool_info_t object + * that describes the pool. + * + * @return 0 on success, -1 if info could not be retrieved.
+ */ + +int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm, + odp_buffer_pool_info_t *info); /** * Print buffer pool info diff --git a/platform/linux-generic/include/api/odp_config.h b/platform/linux-generic/include/api/odp_config.h index 906897c..1226d37 100644 --- a/platform/linux-generic/include/api/odp_config.h +++ b/platform/linux-generic/include/api/odp_config.h @@ -49,6 +49,16 @@ extern "C" { #define ODP_CONFIG_PKTIO_ENTRIES 64 /** + * Segment size to use - + */ +#define ODP_CONFIG_BUF_SEG_SIZE (512*3) + +/** + * Maximum buffer size supported + */ +#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7) + +/** * @} */ diff --git a/platform/linux-generic/include/api/odp_platform_types.h b/platform/linux-generic/include/api/odp_platform_types.h index 4db47d3..b9b3aea 100644 --- a/platform/linux-generic/include/api/odp_platform_types.h +++ b/platform/linux-generic/include/api/odp_platform_types.h @@ -65,6 +65,15 @@ typedef uint32_t odp_pktio_t; #define ODP_PKTIO_ANY ((odp_pktio_t)~0) /** + * ODP shared memory block + */ +typedef uint32_t odp_shm_t; + +/** Invalid shared memory block */ +#define ODP_SHM_INVALID 0 +#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use */ + +/** * @} */ diff --git a/platform/linux-generic/include/api/odp_shared_memory.h b/platform/linux-generic/include/api/odp_shared_memory.h index 26e208b..f70db5a 100644 --- a/platform/linux-generic/include/api/odp_shared_memory.h +++ b/platform/linux-generic/include/api/odp_shared_memory.h @@ -20,6 +20,7 @@ extern "C" { #include <odp_std_types.h> +#include <odp_platform_types.h> /** @defgroup odp_shared_memory ODP SHARED MEMORY * Operations on shared memory. @@ -38,15 +39,6 @@ extern "C" { #define ODP_SHM_PROC 0x2 /**< Share with external processes */ /** - * ODP shared memory block - */ -typedef uint32_t odp_shm_t; - -/** Invalid shared memory block */ -#define ODP_SHM_INVALID 0 - - -/** * Shared memory block info */ typedef struct odp_shm_info_t { diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h new file mode 100644 index 0000000..f33b41d --- /dev/null +++ b/platform/linux-generic/include/odp_buffer_inlines.h @@ -0,0 +1,157 @@ +/* Copyright (c) 2014, Linaro Limited + * All rights reserved. 
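Assuming the query/teardown declarations above, a usage sketch may help; the print format and error handling here are illustrative, and per the destroy note the pool must be quiesced first:

#include <stdio.h>

static void show_and_destroy(odp_buffer_pool_t pool)
{
	odp_buffer_pool_info_t info;
	odp_shm_t shm;

	if (odp_buffer_pool_info(pool, &shm, &info) == 0)
		printf("pool '%s': %u buffers of %zu bytes\n",
		       info.name, info.params.num_bufs,
		       info.params.buf_size);

	/* Per the @note above, destroying a pool that still has
	 * allocated or otherwise active buffers is undefined. */
	if (odp_buffer_pool_destroy(pool) != 0)
		fprintf(stderr, "pool destroy failed\n");

	/* If shm != ODP_SHM_NULL the backing store was supplied by
	 * the caller, who must release it after the destroy. */
}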
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/** + * @file + * + * Inline functions for ODP buffer mgmt routines - implementation internal + */ + +#ifndef ODP_BUFFER_INLINES_H_ +#define ODP_BUFFER_INLINES_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t *hdr) +{ + odp_buffer_bits_t handle; + uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); + struct pool_entry_s *pool = get_pool_entry(pool_id); + + handle.pool_id = pool_id; + handle.index = ((uint8_t *)hdr - pool->pool_base_addr) / + ODP_CACHE_LINE_SIZE; + handle.seg = 0; + + return handle.u32; +} + +static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) +{ + odp_buffer_t hdl = odp_buffer_encode_handle(hdr); + if (hdl != hdr->handle.handle) { + ODP_DBG("buf %p should have handle %x but is cached as %x\n", + hdr, hdl, hdr->handle.handle); + hdr->handle.handle = hdl; + } + return hdr->handle.handle; +} + +static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) +{ + odp_buffer_bits_t handle; + uint32_t pool_id; + uint32_t index; + struct pool_entry_s *pool; + + handle.u32 = buf; + pool_id = handle.pool_id; + index = handle.index; + +#ifdef POOL_ERROR_CHECK + if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { + ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); + return NULL; + } +#endif + + pool = get_pool_entry(pool_id); + +#ifdef POOL_ERROR_CHECK + if (odp_unlikely(index > pool->params.num_bufs - 1)) { + ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); + return NULL; + } +#endif + + return (odp_buffer_hdr_t *)(void *) + (pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE)); +} + +static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf) +{ + return odp_atomic_load_u32(&buf->ref_count); +} + +static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf, + uint32_t val) +{ + return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val; +} + +static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf, + uint32_t val) +{ + uint32_t tmp; + + tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val); + + if (tmp < val) { + odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp); + return 0; + } else { + return tmp - val; + } +} + +static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) +{ + odp_buffer_bits_t handle; + odp_buffer_hdr_t *buf_hdr; + handle.u32 = buf; + + /* For buffer handles, segment index must be 0 */ + if (handle.seg != 0) + return NULL; + + pool_entry_t *pool = odp_pool_to_entry(handle.pool_id); + + /* If pool not created, handle is invalid */ + if (pool->s.pool_shm == ODP_SHM_INVALID) + return NULL; + + uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; + + /* A valid buffer index must be on stride, and must be in range */ + if ((handle.index % buf_stride != 0) || + ((uint32_t)(handle.index / buf_stride) >= pool->s.params.num_bufs)) + return NULL; + + buf_hdr = (odp_buffer_hdr_t *)(void *) + (pool->s.pool_base_addr + + (handle.index * ODP_CACHE_LINE_SIZE)); + + /* Handle is valid, so buffer is valid if it is allocated */ + if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0) + return NULL; + else + return buf_hdr; +} + +int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); + +static inline void *buffer_map(odp_buffer_hdr_t *buf, + size_t offset, + size_t *seglen, + size_t limit) +{ + int seg_index = offset / buf->segsize; + int seg_offset = offset % buf->segsize; + size_t buf_left = limit - offset; + + *seglen = buf_left < buf->segsize ? 
+ buf_left : buf->segsize - seg_offset; + + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); +} + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 0027bfc..29666db 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -24,99 +24,118 @@ extern "C" { #include <odp_buffer.h> #include <odp_debug.h> #include <odp_align.h> - -/* TODO: move these to correct files */ - -typedef uint64_t odp_phys_addr_t; - -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) - -#define ODP_BUFS_PER_CHUNK 16 -#define ODP_BUFS_PER_SCATTER 4 - -#define ODP_BUFFER_TYPE_CHUNK 0xffff - +#include <odp_config.h> +#include <odp_byteorder.h> +#include <odp_thread.h> + + +#define ODP_BUFFER_MAX_SEG (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE) +#define ODP_MAX_INLINE_BUF (sizeof(void *) * (ODP_BUFFER_MAX_SEG - 1)) + +ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0, + "ODP Segment size must be a multiple of cache line size"); + +#define ODP_SEGBITS(x) \ + ((x) < 2 ? 1 : \ + ((x) < 4 ? 2 : \ + ((x) < 8 ? 3 : \ + ((x) < 16 ? 4 : \ + ((x) < 32 ? 5 : \ + ((x) < 64 ? 6 : \ + ((x) < 128 ? 7 : \ + ((x) < 256 ? 8 : \ + ((x) < 512 ? 9 : \ + ((x) < 1024 ? 10 : \ + ((x) < 2048 ? 11 : \ + ((x) < 4096 ? 12 : \ + (0/0))))))))))))) + +ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) < + ODP_SEGBITS(ODP_CACHE_LINE_SIZE), + "Number of segments must not exceed log of cache line size"); #define ODP_BUFFER_POOL_BITS 4 -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS) +#define ODP_BUFFER_SEG_BITS ODP_SEGBITS(ODP_CACHE_LINE_SIZE) +#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - ODP_BUFFER_SEG_BITS) +#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + ODP_BUFFER_INDEX_BITS) #define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) +#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) +#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1) + typedef union odp_buffer_bits_t { uint32_t u32; odp_buffer_t handle; struct { +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN uint32_t pool_id:ODP_BUFFER_POOL_BITS; uint32_t index:ODP_BUFFER_INDEX_BITS; + uint32_t seg:ODP_BUFFER_SEG_BITS; +#else + uint32_t seg:ODP_BUFFER_SEG_BITS; + uint32_t index:ODP_BUFFER_INDEX_BITS; + uint32_t pool_id:ODP_BUFFER_POOL_BITS; +#endif }; -} odp_buffer_bits_t; + struct { +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; +#else + uint32_t pfxseg:ODP_BUFFER_SEG_BITS; + uint32_t prefix:ODP_BUFFER_PREFIX_BITS; +#endif + }; +} odp_buffer_bits_t; /* forward declaration */ struct odp_buffer_hdr_t; - -/* - * Scatter/gather list of buffers - */ -typedef struct odp_buffer_scatter_t { - /* buffer pointers */ - struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER]; - int num_bufs; /* num buffers */ - int pos; /* position on the list */ - size_t total_len; /* Total length */ -} odp_buffer_scatter_t; - - -/* - * Chunk of buffers (in single pool) - */ -typedef struct odp_buffer_chunk_t { - uint32_t num_bufs; /* num buffers */ - uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */ -} odp_buffer_chunk_t; - - /* Common buffer header */ typedef struct odp_buffer_hdr_t { struct odp_buffer_hdr_t *next; /* next buf in a list */ + int allocator; /* 
allocating thread id */ odp_buffer_bits_t handle; /* handle */ - odp_phys_addr_t phys_addr; /* physical data start address */ - void *addr; /* virtual data start address */ - uint32_t index; /* buf index in the pool */ + union { + uint32_t all; + struct { + uint32_t zeroized:1; /* Zeroize buf data on free */ + uint32_t hdrdata:1; /* Data is in buffer hdr */ + }; + } flags; + int type; /* buffer type */ size_t size; /* max data size */ - size_t cur_offset; /* current offset */ odp_atomic_u32_t ref_count; /* reference count */ - odp_buffer_scatter_t scatter; /* Scatter/gather list */ - int type; /* type of next header */ odp_buffer_pool_t pool_hdl; /* buffer pool handle */ - + union { + void *buf_ctx; /* user context */ + void *udata_addr; /* user metadata addr */ + }; + size_t udata_size; /* size of user metadata */ + uint32_t segcount; /* segment count */ + uint32_t segsize; /* segment size */ + void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs */ } odp_buffer_hdr_t; -/* Ensure next header starts from 8 byte align */ -ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, "ODP_BUFFER_HDR_T__SIZE_ERROR"); +typedef struct odp_buffer_hdr_stride { + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; +} odp_buffer_hdr_stride; +typedef struct odp_buf_blk_t { + struct odp_buf_blk_t *next; + struct odp_buf_blk_t *prev; +} odp_buf_blk_t; /* Raw buffer header */ typedef struct { odp_buffer_hdr_t buf_hdr; /* common buffer header */ - uint8_t buf_data[]; /* start of buffer data area */ } odp_raw_buffer_hdr_t; - -/* Chunk header */ -typedef struct odp_buffer_chunk_hdr_t { - odp_buffer_hdr_t buf_hdr; - odp_buffer_chunk_t chunk; -} odp_buffer_chunk_hdr_t; - - -int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf); - -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src); - +/* Forward declarations */ +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size); #ifdef __cplusplus } diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h index e0210bd..cd58f91 100644 --- a/platform/linux-generic/include/odp_buffer_pool_internal.h +++ b/platform/linux-generic/include/odp_buffer_pool_internal.h @@ -25,6 +25,35 @@ extern "C" { #include <odp_hints.h> #include <odp_config.h> #include <odp_debug.h> +#include <odp_shared_memory.h> +#include <odp_atomic.h> +#include <odp_atomic_internal.h> +#include <string.h> + +/** + * Buffer initialization routine prototype + * + * @note Routines of this type MAY be passed as part of the + * _odp_buffer_pool_init_t structure to be called whenever a + * buffer is allocated to initialize the user metadata + * associated with that buffer. + */ +typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); + +/** + * Buffer pool initialization parameters + * + * @param[in] udata_size Size of the user metadata for each buffer + * @param[in] buf_init Function pointer to be called to initialize the + * user metadata for each buffer in the pool. + * @param[in] buf_init_arg Argument to be passed to buf_init(). 
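Since the buffer initialization hook is new, a hypothetical use of the struct declared just below may help. Note this is illustrative only: udata support is restricted in v1.0 elsewhere in this patch, and the my_buffer_udata() accessor is invented for the sketch:

typedef struct my_udata {
	uint32_t flow_id;
} my_udata_t;

/* Hypothetical per-buffer initializer matching _odp_buf_init_t */
static void my_buf_init(odp_buffer_t buf, void *arg)
{
	my_udata_t *u = my_buffer_udata(buf); /* hypothetical accessor */

	(void)arg;
	u->flow_id = 0;
}

static _odp_buffer_pool_init_t init_params = {
	.udata_size   = sizeof(my_udata_t),
	.buf_init     = my_buf_init,
	.buf_init_arg = NULL,
};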
+ * + */ +typedef struct _odp_buffer_pool_init_t { + size_t udata_size; /**< Size of user metadata for each buffer */ + _odp_buf_init_t *buf_init; /**< Buffer initialization routine to use */ + void *buf_init_arg; /**< Argument to be passed to buf_init() */ +} _odp_buffer_pool_init_t; /**< Type of buffer initialization struct */ /* Use ticketlock instead of spinlock */ #define POOL_USE_TICKETLOCK @@ -39,6 +68,17 @@ extern "C" { #include <odp_spinlock.h> #endif +#ifdef POOL_USE_TICKETLOCK +#include <odp_ticketlock.h> +#define LOCK(a) odp_ticketlock_lock(a) +#define UNLOCK(a) odp_ticketlock_unlock(a) +#define LOCK_INIT(a) odp_ticketlock_init(a) +#else +#include <odp_spinlock.h> +#define LOCK(a) odp_spinlock_lock(a) +#define UNLOCK(a) odp_spinlock_unlock(a) +#define LOCK_INIT(a) odp_spinlock_init(a) +#endif struct pool_entry_s { #ifdef POOL_USE_TICKETLOCK @@ -47,66 +87,224 @@ struct pool_entry_s { odp_spinlock_t lock ODP_ALIGNED_CACHE; #endif - odp_buffer_chunk_hdr_t *head; - uint64_t free_bufs; char name[ODP_BUFFER_POOL_NAME_LEN]; - - odp_buffer_pool_t pool_hdl ODP_ALIGNED_CACHE; - uintptr_t buf_base; - size_t buf_size; - size_t buf_offset; - uint64_t num_bufs; - void *pool_base_addr; - uint64_t pool_size; - size_t user_size; - size_t user_align; - int buf_type; - size_t hdr_size; + odp_buffer_pool_param_t params; + _odp_buffer_pool_init_t init_params; + odp_buffer_pool_t pool_hdl; + odp_shm_t pool_shm; + union { + uint32_t all; + struct { + uint32_t has_name:1; + uint32_t user_supplied_shm:1; + uint32_t unsegmented:1; + uint32_t zeroized:1; + uint32_t quiesced:1; + uint32_t low_wm_assert:1; + uint32_t predefined:1; + }; + } flags; + uint8_t *pool_base_addr; + size_t pool_size; + uint32_t buf_stride; + _odp_atomic_ptr_t buf_freelist; + _odp_atomic_ptr_t blk_freelist; + odp_atomic_u32_t bufcount; + odp_atomic_u32_t blkcount; + odp_atomic_u64_t bufallocs; + odp_atomic_u64_t buffrees; + odp_atomic_u64_t blkallocs; + odp_atomic_u64_t blkfrees; + odp_atomic_u64_t bufempty; + odp_atomic_u64_t blkempty; + odp_atomic_u64_t high_wm_count; + odp_atomic_u64_t low_wm_count; + size_t seg_size; + size_t high_wm; + size_t low_wm; + size_t headroom; + size_t tailroom; }; +typedef union pool_entry_u { + struct pool_entry_s s; + + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; +} pool_entry_t; extern void *pool_entry_ptr[]; +#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1) +#define buffer_is_secure(buf) (buf->flags.zeroized) +#define pool_is_secure(pool) (pool->flags.zeroized) +#else +#define buffer_is_secure(buf) 0 +#define pool_is_secure(pool) 0 +#endif + +#define TAG_ALIGN ((size_t)16) -static inline void *get_pool_entry(uint32_t pool_id) +#define odp_cs(ptr, old, new) \ + _odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, \ + _ODP_MEMMODEL_SC, \ + _ODP_MEMMODEL_SC) + +/* Helper functions for pointer tagging to avoid ABA race conditions */ +#define odp_tag(ptr) \ + (((size_t)ptr) & (TAG_ALIGN - 1)) + +#define odp_detag(ptr) \ + ((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN)) + +#define odp_retag(ptr, tag) \ + ((typeof(ptr))(((size_t)ptr) | odp_tag(tag))) + + +static inline void *get_blk(struct pool_entry_s *pool) { - return pool_entry_ptr[pool_id]; + void *oldhead, *myhead, *newhead; + + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ); + + do { + size_t tag = odp_tag(oldhead); + myhead = odp_detag(oldhead); + if (myhead == NULL) + break; + newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + 1); + } while 
(odp_cs(pool->blk_freelist, oldhead, newhead) == 0); + + if (myhead == NULL) { + odp_atomic_inc_u64(&pool->blkempty); + } else { + uint64_t blkcount = + odp_atomic_fetch_sub_u32(&pool->blkcount, 1); + + /* Check for low watermark condition */ + if (blkcount == pool->low_wm) { + LOCK(&pool->lock); + if (blkcount <= pool->low_wm && + !pool->flags.low_wm_assert) { + pool->flags.low_wm_assert = 1; + odp_atomic_inc_u64(&pool->low_wm_count); + } + UNLOCK(&pool->lock); + } + odp_atomic_inc_u64(&pool->blkallocs); + } + + return (void *)myhead; } +static inline void ret_blk(struct pool_entry_s *pool, void *block) +{ + void *oldhead, *myhead, *myblock; + + oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ); -static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) + do { + size_t tag = odp_tag(oldhead); + myhead = odp_detag(oldhead); + ((odp_buf_blk_t *)block)->next = myhead; + myblock = odp_retag(block, tag + 1); + } while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0); + + odp_atomic_inc_u64(&pool->blkfrees); + uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1); + + /* Check if low watermark condition should be deasserted */ + if (blkcount == pool->high_wm) { + LOCK(&pool->lock); + if (blkcount == pool->high_wm && pool->flags.low_wm_assert) { + pool->flags.low_wm_assert = 0; + odp_atomic_inc_u64(&pool->high_wm_count); + } + UNLOCK(&pool->lock); + } +} + +static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool) { - odp_buffer_bits_t handle; - uint32_t pool_id; - uint32_t index; - struct pool_entry_s *pool; - odp_buffer_hdr_t *hdr; - - handle.u32 = buf; - pool_id = handle.pool_id; - index = handle.index; - -#ifdef POOL_ERROR_CHECK - if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) { - ODP_ERR("odp_buf_to_hdr: Bad pool id\n"); - return NULL; + odp_buffer_hdr_t *oldhead, *myhead, *newhead; + + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ); + + do { + size_t tag = odp_tag(oldhead); + myhead = odp_detag(oldhead); + if (myhead == NULL) + break; + newhead = odp_retag(myhead->next, tag + 1); + } while (odp_cs(pool->buf_freelist, oldhead, newhead) == 0); + + if (myhead != NULL) { + myhead->next = myhead; + myhead->allocator = odp_thread_id(); + odp_atomic_inc_u32(&pool->bufcount); + odp_atomic_inc_u64(&pool->bufallocs); + } else { + odp_atomic_inc_u64(&pool->bufempty); } -#endif - pool = get_pool_entry(pool_id); + return (void *)myhead; +} + +static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf) +{ + odp_buffer_hdr_t *oldhead, *myhead, *mybuf; -#ifdef POOL_ERROR_CHECK - if (odp_unlikely(index > pool->num_bufs - 1)) { - ODP_ERR("odp_buf_to_hdr: Bad buffer index\n"); - return NULL; + if (!buf->flags.hdrdata && buf->type != ODP_BUFFER_TYPE_RAW) { + while (buf->segcount > 0) { + if (buffer_is_secure(buf) || pool_is_secure(pool)) + memset(buf->addr[buf->segcount - 1], + 0, buf->segsize); + ret_blk(pool, buf->addr[--buf->segcount]); + } + buf->size = 0; } -#endif - hdr = (odp_buffer_hdr_t *)(pool->buf_base + index * pool->buf_size); + oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ); - return hdr; + do { + size_t tag = odp_tag(oldhead); + myhead = odp_detag(oldhead); + buf->next = myhead; + mybuf = odp_retag(buf, tag + 1); + } while (odp_cs(pool->buf_freelist, oldhead, mybuf) == 0); + + odp_atomic_dec_u32(&pool->bufcount); + odp_atomic_inc_u64(&pool->buffrees); +} + +static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) +{ + return pool_id + 1; } +static inline 
uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) +{ + return pool_hdl - 1; +} + +static inline void *get_pool_entry(uint32_t pool_id) +{ + return pool_entry_ptr[pool_id]; +} + +static inline pool_entry_t *odp_pool_to_entry(odp_buffer_pool_t pool) +{ + return (pool_entry_t *)get_pool_entry(pool_handle_to_index(pool)); +} + +static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf) +{ + return odp_pool_to_entry(buf->pool_hdl); +} + +static inline size_t odp_buffer_pool_segment_size(odp_buffer_pool_t pool) +{ + return odp_pool_to_entry(pool)->s.seg_size; +} #ifdef __cplusplus } diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h index 49c59b2..f34a83d 100644 --- a/platform/linux-generic/include/odp_packet_internal.h +++ b/platform/linux-generic/include/odp_packet_internal.h @@ -22,6 +22,7 @@ extern "C" { #include <odp_debug.h> #include <odp_buffer_internal.h> #include <odp_buffer_pool_internal.h> +#include <odp_buffer_inlines.h> #include <odp_packet.h> #include <odp_packet_io.h> @@ -92,7 +93,8 @@ typedef union { }; } output_flags_t; -ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), "OUTPUT_FLAGS_SIZE_ERROR"); +ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), + "OUTPUT_FLAGS_SIZE_ERROR"); /** * Internal Packet header @@ -105,25 +107,23 @@ typedef struct { error_flags_t error_flags; output_flags_t output_flags; - uint32_t frame_offset; /**< offset to start of frame, even on error */ uint32_t l2_offset; /**< offset to L2 hdr, e.g. Eth */ uint32_t l3_offset; /**< offset to L3 hdr, e.g. IPv4, IPv6 */ uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, also ICMP) */ uint32_t frame_len; + uint32_t headroom; + uint32_t tailroom; uint64_t user_ctx; /* user context */ odp_pktio_t input; - - uint32_t pad; - uint8_t buf_data[]; /* start of buffer data area */ } odp_packet_hdr_t; -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) == ODP_OFFSETOF(odp_packet_hdr_t, buf_data), - "ODP_PACKET_HDR_T__SIZE_ERR"); -ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) % sizeof(uint64_t) == 0, - "ODP_PACKET_HDR_T__SIZE_ERR2"); +typedef struct odp_packet_hdr_stride { + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t))]; +} odp_packet_hdr_stride; + /** * Return the packet header @@ -138,6 +138,38 @@ static inline odp_packet_hdr_t *odp_packet_hdr(odp_packet_t pkt) */ void odp_packet_parse(odp_packet_t pkt, size_t len, size_t l2_offset); +/** + * Initialize packet buffer + */ +static inline void packet_init(pool_entry_t *pool, + odp_packet_hdr_t *pkt_hdr, + size_t size) +{ + /* + * Reset parser metadata. Note that we clear via memset to make + * this routine independent of any additional adds to packet metadata. + */ + const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr); + uint8_t *start; + size_t len; + + start = (uint8_t *)pkt_hdr + start_offset; + len = sizeof(odp_packet_hdr_t) - start_offset; + memset(start, 0, len); + + /* + * Packet headroom is set from the pool's headroom + * Packet tailroom is rounded up to fill the last + * segment occupied by the allocated length.
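Worked numbers for the tailroom computation that follows, assuming the defaults this patch configures: ODP_CONFIG_BUF_SEG_SIZE = 512*3 = 1536 bytes, pool headroom = 0, and a 2000-byte allocation that therefore spans two segments:

static uint32_t example_tailroom(void)
{
	uint32_t seg_size = 1536; /* ODP_CONFIG_BUF_SEG_SIZE = 512*3 */
	uint32_t segcount = 2;    /* 2000 bytes needs two segments */
	uint32_t headroom = 0;    /* pool headroom set by this patch */
	uint32_t size = 2000;

	/* (1536 * 2) - (0 + 2000) = 3072 - 2000 = 1072 bytes of
	 * unused space at the tail of the last segment */
	return (seg_size * segcount) - (headroom + size);
}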
+ */ + pkt_hdr->frame_len = size; + pkt_hdr->headroom = pool->s.headroom; + pkt_hdr->tailroom = + (pool->s.seg_size * pkt_hdr->buf_hdr.segcount) - + (pool->s.headroom + size); +} + + #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/include/odp_timer_internal.h b/platform/linux-generic/include/odp_timer_internal.h index ad28f53..2ff36ce 100644 --- a/platform/linux-generic/include/odp_timer_internal.h +++ b/platform/linux-generic/include/odp_timer_internal.h @@ -51,14 +51,9 @@ typedef struct odp_timeout_hdr_t { uint8_t buf_data[]; } odp_timeout_hdr_t; - - -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) == - ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), - "ODP_TIMEOUT_HDR_T__SIZE_ERR"); - -ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0, - "ODP_TIMEOUT_HDR_T__SIZE_ERR2"); +typedef struct odp_timeout_hdr_stride { + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))]; +} odp_timeout_hdr_stride; /** diff --git a/platform/linux-generic/odp_buffer.c b/platform/linux-generic/odp_buffer.c index bcbb99a..366190c 100644 --- a/platform/linux-generic/odp_buffer.c +++ b/platform/linux-generic/odp_buffer.c @@ -5,8 +5,9 @@ */ #include <odp_buffer.h> -#include <odp_buffer_internal.h> #include <odp_buffer_pool_internal.h> +#include <odp_buffer_internal.h> +#include <odp_buffer_inlines.h> #include <string.h> #include <stdio.h> @@ -16,7 +17,7 @@ void *odp_buffer_addr(odp_buffer_t buf) { odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); - return hdr->addr; + return hdr->addr[0]; } @@ -38,11 +39,7 @@ int odp_buffer_type(odp_buffer_t buf) int odp_buffer_is_valid(odp_buffer_t buf) { - odp_buffer_bits_t handle; - - handle.u32 = buf; - - return (handle.index != ODP_BUFFER_INVALID_INDEX); + return validate_buf(buf) != NULL; } @@ -63,28 +60,14 @@ int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf) len += snprintf(&str[len], n-len, " pool %i\n", hdr->pool_hdl); len += snprintf(&str[len], n-len, - " index %"PRIu32"\n", hdr->index); - len += snprintf(&str[len], n-len, - " phy_addr %"PRIu64"\n", hdr->phys_addr); - len += snprintf(&str[len], n-len, " addr %p\n", hdr->addr); len += snprintf(&str[len], n-len, " size %zu\n", hdr->size); len += snprintf(&str[len], n-len, - " cur_offset %zu\n", hdr->cur_offset); - len += snprintf(&str[len], n-len, " ref_count %i\n", odp_atomic_load_u32(&hdr->ref_count)); len += snprintf(&str[len], n-len, " type %i\n", hdr->type); - len += snprintf(&str[len], n-len, - " Scatter list\n"); - len += snprintf(&str[len], n-len, - " num_bufs %i\n", hdr->scatter.num_bufs); - len += snprintf(&str[len], n-len, - " pos %i\n", hdr->scatter.pos); - len += snprintf(&str[len], n-len, - " total_len %zu\n", hdr->scatter.total_len); return len; } @@ -101,9 +84,3 @@ void odp_buffer_print(odp_buffer_t buf) ODP_PRINT("\n%s\n", str); } - -void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src) -{ - (void)buf_dst; - (void)buf_src; -} diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c index 6a0a6b2..f545090 100644 --- a/platform/linux-generic/odp_buffer_pool.c +++ b/platform/linux-generic/odp_buffer_pool.c @@ -6,8 +6,9 @@ #include <odp_std_types.h> #include <odp_buffer_pool.h> -#include <odp_buffer_pool_internal.h> #include <odp_buffer_internal.h> +#include <odp_buffer_pool_internal.h> +#include <odp_buffer_inlines.h> #include <odp_packet_internal.h> #include <odp_timer_internal.h> #include <odp_shared_memory.h> @@ -16,57 +17,35 @@ #include <odp_config.h> #include <odp_hints.h> #include <odp_debug.h> 
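Before the pool implementation below, a self-contained C11 sketch of the pointer-tagging scheme its freelists rely on (see get_blk()/ret_blk() in odp_buffer_pool_internal.h above). This mirrors the odp_tag()/odp_detag()/odp_retag() idea with standard atomics rather than the internal _odp_atomic_* API, and shows only the pop side:

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define TAG_ALIGN ((uintptr_t)16) /* nodes are 16-byte aligned */

typedef struct node {
	struct node *next;
} node_t;

static _Atomic uintptr_t freelist; /* tagged head pointer */

static inline node_t *detag(uintptr_t v)
{
	return (node_t *)(v & ~(TAG_ALIGN - 1));
}

/* Pop with an ABA guard: the tag in the low bits changes on every
 * successful update, so a CAS against a stale head value fails even
 * if the same node address has been pushed back in the meantime. */
static node_t *pop(void)
{
	uintptr_t old = atomic_load(&freelist);
	uintptr_t upd;
	node_t *n;

	do {
		n = detag(old);
		if (n == NULL)
			return NULL;
		upd = (uintptr_t)n->next |
		      ((old + 1) & (TAG_ALIGN - 1));
	} while (!atomic_compare_exchange_weak(&freelist, &old, upd));

	return n;
}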
+#include <odp_atomic_internal.h> #include <string.h> #include <stdlib.h> -#ifdef POOL_USE_TICKETLOCK -#include <odp_ticketlock.h> -#define LOCK(a) odp_ticketlock_lock(a) -#define UNLOCK(a) odp_ticketlock_unlock(a) -#define LOCK_INIT(a) odp_ticketlock_init(a) -#else -#include <odp_spinlock.h> -#define LOCK(a) odp_spinlock_lock(a) -#define UNLOCK(a) odp_spinlock_unlock(a) -#define LOCK_INIT(a) odp_spinlock_init(a) -#endif - - #if ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS #error ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS #endif -#define NULL_INDEX ((uint32_t)-1) -union buffer_type_any_u { +typedef union buffer_type_any_u { odp_buffer_hdr_t buf; odp_packet_hdr_t pkt; odp_timeout_hdr_t tmo; -}; - -ODP_STATIC_ASSERT((sizeof(union buffer_type_any_u) % 8) == 0, - "BUFFER_TYPE_ANY_U__SIZE_ERR"); +} odp_anybuf_t; /* Any buffer type header */ typedef struct { union buffer_type_any_u any_hdr; /* any buffer type */ - uint8_t buf_data[]; /* start of buffer data area */ } odp_any_buffer_hdr_t; - -typedef union pool_entry_u { - struct pool_entry_s s; - - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; - -} pool_entry_t; +typedef struct odp_any_hdr_stride { + uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_any_buffer_hdr_t))]; +} odp_any_hdr_stride; typedef struct pool_table_t { pool_entry_t pool[ODP_CONFIG_BUFFER_POOLS]; - } pool_table_t; @@ -77,38 +56,6 @@ static pool_table_t *pool_tbl; void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS]; -static __thread odp_buffer_chunk_hdr_t *local_chunk[ODP_CONFIG_BUFFER_POOLS]; - - -static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id) -{ - return pool_id + 1; -} - - -static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl) -{ - return pool_hdl -1; -} - - -static inline void set_handle(odp_buffer_hdr_t *hdr, - pool_entry_t *pool, uint32_t index) -{ - odp_buffer_pool_t pool_hdl = pool->s.pool_hdl; - uint32_t pool_id = pool_handle_to_index(pool_hdl); - - if (pool_id >= ODP_CONFIG_BUFFER_POOLS) - ODP_ABORT("set_handle: Bad pool handle %u\n", pool_hdl); - - if (index > ODP_BUFFER_MAX_INDEX) - ODP_ERR("set_handle: Bad buffer index\n"); - - hdr->handle.pool_id = pool_id; - hdr->handle.index = index; -} - - int odp_buffer_pool_init_global(void) { uint32_t i; @@ -142,269 +89,244 @@ int odp_buffer_pool_init_global(void) return 0; } +/** + * Buffer pool creation + */ -static odp_buffer_hdr_t *index_to_hdr(pool_entry_t *pool, uint32_t index) -{ - odp_buffer_hdr_t *hdr; - - hdr = (odp_buffer_hdr_t *)(pool->s.buf_base + index * pool->s.buf_size); - return hdr; -} - - -static void add_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr, uint32_t index) -{ - uint32_t i = chunk_hdr->chunk.num_bufs; - chunk_hdr->chunk.buf_index[i] = index; - chunk_hdr->chunk.num_bufs++; -} - - -static uint32_t rem_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr) +odp_buffer_pool_t odp_buffer_pool_create(const char *name, + odp_shm_t shm, + odp_buffer_pool_param_t *params) { - uint32_t index; + odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; + pool_entry_t *pool; uint32_t i; - i = chunk_hdr->chunk.num_bufs - 1; - index = chunk_hdr->chunk.buf_index[i]; - chunk_hdr->chunk.num_bufs--; - return index; -} - - -static odp_buffer_chunk_hdr_t *next_chunk(pool_entry_t *pool, - odp_buffer_chunk_hdr_t *chunk_hdr) -{ - uint32_t index; - - index = chunk_hdr->chunk.buf_index[ODP_BUFS_PER_CHUNK-1]; - if (index == NULL_INDEX) - return NULL; - else - return (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index); -} - - -static odp_buffer_chunk_hdr_t 
*rem_chunk(pool_entry_t *pool) -{ - odp_buffer_chunk_hdr_t *chunk_hdr; - - chunk_hdr = pool->s.head; - if (chunk_hdr == NULL) { - /* Pool is empty */ - return NULL; - } - - pool->s.head = next_chunk(pool, chunk_hdr); - pool->s.free_bufs -= ODP_BUFS_PER_CHUNK; + /* Default initialization paramters */ + static _odp_buffer_pool_init_t default_init_params = { + .udata_size = 0, + .buf_init = NULL, + .buf_init_arg = NULL, + }; - /* unlink */ - rem_buf_index(chunk_hdr); - return chunk_hdr; -} + _odp_buffer_pool_init_t *init_params = &default_init_params; + if (params == NULL) + return ODP_BUFFER_POOL_INVALID; -static void add_chunk(pool_entry_t *pool, odp_buffer_chunk_hdr_t *chunk_hdr) -{ - if (pool->s.head) /* link pool head to the chunk */ - add_buf_index(chunk_hdr, pool->s.head->buf_hdr.index); - else - add_buf_index(chunk_hdr, NULL_INDEX); + /* Restriction for v1.0: All buffers are unsegmented */ + const int unsegmented = 1; - pool->s.head = chunk_hdr; - pool->s.free_bufs += ODP_BUFS_PER_CHUNK; -} + /* Restriction for v1.0: No zeroization support */ + const int zeroized = 0; + /* Restriction for v1.0: No udata support */ + uint32_t udata_stride = (init_params->udata_size > sizeof(void *)) ? + ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) : + 0; -static void check_align(pool_entry_t *pool, odp_buffer_hdr_t *hdr) -{ - if (!ODP_ALIGNED_CHECK_POWER_2(hdr->addr, pool->s.user_align)) { - ODP_ABORT("check_align: user data align error %p, align %zu\n", - hdr->addr, pool->s.user_align); - } - - if (!ODP_ALIGNED_CHECK_POWER_2(hdr, ODP_CACHE_LINE_SIZE)) { - ODP_ABORT("check_align: hdr align error %p, align %i\n", - hdr, ODP_CACHE_LINE_SIZE); - } -} - + uint32_t blk_size, buf_stride; -static void fill_hdr(void *ptr, pool_entry_t *pool, uint32_t index, - int buf_type) -{ - odp_buffer_hdr_t *hdr = (odp_buffer_hdr_t *)ptr; - size_t size = pool->s.hdr_size; - uint8_t *buf_data; - - if (buf_type == ODP_BUFFER_TYPE_CHUNK) - size = sizeof(odp_buffer_chunk_hdr_t); + switch (params->buf_type) { + case ODP_BUFFER_TYPE_RAW: + blk_size = params->buf_size; - switch (pool->s.buf_type) { - odp_raw_buffer_hdr_t *raw_hdr; - odp_packet_hdr_t *packet_hdr; - odp_timeout_hdr_t *tmo_hdr; - odp_any_buffer_hdr_t *any_hdr; + /* Optimize small raw buffers */ + if (blk_size > ODP_MAX_INLINE_BUF) + blk_size = ODP_ALIGN_ROUNDUP(blk_size, TAG_ALIGN); - case ODP_BUFFER_TYPE_RAW: - raw_hdr = ptr; - buf_data = raw_hdr->buf_data; + buf_stride = sizeof(odp_buffer_hdr_stride); break; + case ODP_BUFFER_TYPE_PACKET: - packet_hdr = ptr; - buf_data = packet_hdr->buf_data; + if (unsegmented) + blk_size = + ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); + else + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, + ODP_CONFIG_BUF_SEG_SIZE); + buf_stride = sizeof(odp_packet_hdr_stride); break; + case ODP_BUFFER_TYPE_TIMEOUT: - tmo_hdr = ptr; - buf_data = tmo_hdr->buf_data; + blk_size = 0; /* Timeouts have no block data, only metadata */ + buf_stride = sizeof(odp_timeout_hdr_stride); break; + case ODP_BUFFER_TYPE_ANY: - any_hdr = ptr; - buf_data = any_hdr->buf_data; + if (unsegmented) + blk_size = + ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size); + else + blk_size = ODP_ALIGN_ROUNDUP(params->buf_size, + ODP_CONFIG_BUF_SEG_SIZE); + buf_stride = sizeof(odp_any_hdr_stride); break; - default: - ODP_ABORT("Bad buffer type\n"); - } - - memset(hdr, 0, size); - - set_handle(hdr, pool, index); - - hdr->addr = &buf_data[pool->s.buf_offset - pool->s.hdr_size]; - hdr->index = index; - hdr->size = pool->s.user_size; - hdr->pool_hdl = pool->s.pool_hdl; - 
hdr->type = buf_type; - - check_align(pool, hdr); -} - - -static void link_bufs(pool_entry_t *pool) -{ - odp_buffer_chunk_hdr_t *chunk_hdr; - size_t hdr_size; - size_t data_size; - size_t data_align; - size_t tot_size; - size_t offset; - size_t min_size; - uint64_t pool_size; - uintptr_t buf_base; - uint32_t index; - uintptr_t pool_base; - int buf_type; - - buf_type = pool->s.buf_type; - data_size = pool->s.user_size; - data_align = pool->s.user_align; - pool_size = pool->s.pool_size; - pool_base = (uintptr_t) pool->s.pool_base_addr; - - if (buf_type == ODP_BUFFER_TYPE_RAW) { - hdr_size = sizeof(odp_raw_buffer_hdr_t); - } else if (buf_type == ODP_BUFFER_TYPE_PACKET) { - hdr_size = sizeof(odp_packet_hdr_t); - } else if (buf_type == ODP_BUFFER_TYPE_TIMEOUT) { - hdr_size = sizeof(odp_timeout_hdr_t); - } else if (buf_type == ODP_BUFFER_TYPE_ANY) { - hdr_size = sizeof(odp_any_buffer_hdr_t); - } else - ODP_ABORT("odp_buffer_pool_create: Bad type %i\n", buf_type); - - - /* Chunk must fit into buffer data area.*/ - min_size = sizeof(odp_buffer_chunk_hdr_t) - hdr_size; - if (data_size < min_size) - data_size = min_size; - - /* Roundup data size to full cachelines */ - data_size = ODP_CACHE_LINE_SIZE_ROUNDUP(data_size); - - /* Min cacheline alignment for buffer header and data */ - data_align = ODP_CACHE_LINE_SIZE_ROUNDUP(data_align); - offset = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size); - - /* Multiples of cacheline size */ - if (data_size > data_align) - tot_size = data_size + offset; - else - tot_size = data_align + offset; - - /* First buffer */ - buf_base = ODP_ALIGN_ROUNDUP(pool_base + offset, data_align) - offset; - - pool->s.hdr_size = hdr_size; - pool->s.buf_base = buf_base; - pool->s.buf_size = tot_size; - pool->s.buf_offset = offset; - index = 0; - - chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index); - pool->s.head = NULL; - pool_size -= buf_base - pool_base; - - while (pool_size > ODP_BUFS_PER_CHUNK * tot_size) { - int i; - - fill_hdr(chunk_hdr, pool, index, ODP_BUFFER_TYPE_CHUNK); - - index++; - - for (i = 0; i < ODP_BUFS_PER_CHUNK - 1; i++) { - odp_buffer_hdr_t *hdr = index_to_hdr(pool, index); - - fill_hdr(hdr, pool, index, buf_type); - - add_buf_index(chunk_hdr, index); - index++; - } - - add_chunk(pool, chunk_hdr); - chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, - index); - pool->s.num_bufs += ODP_BUFS_PER_CHUNK; - pool_size -= ODP_BUFS_PER_CHUNK * tot_size; + default: + return ODP_BUFFER_POOL_INVALID; } -} - - -odp_buffer_pool_t odp_buffer_pool_create(const char *name, - void *base_addr, uint64_t size, - size_t buf_size, size_t buf_align, - int buf_type) -{ - odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID; - pool_entry_t *pool; - uint32_t i; + /* Find an unused buffer pool slot and iniitalize it as requested */ for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) { pool = get_pool_entry(i); LOCK(&pool->s.lock); + if (pool->s.pool_shm != ODP_SHM_INVALID) { + UNLOCK(&pool->s.lock); + continue; + } + + /* found free pool */ + size_t block_size, mdata_size, udata_size; - if (pool->s.buf_base == 0) { - /* found free pool */ + pool->s.flags.all = 0; + if (name == NULL) { + pool->s.name[0] = 0; + } else { strncpy(pool->s.name, name, ODP_BUFFER_POOL_NAME_LEN - 1); pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0; - pool->s.pool_base_addr = base_addr; - pool->s.pool_size = size; - pool->s.user_size = buf_size; - pool->s.user_align = buf_align; - pool->s.buf_type = buf_type; - - link_bufs(pool); - - UNLOCK(&pool->s.lock); + pool->s.flags.has_name = 1; + } - pool_hdl = 
pool->s.pool_hdl; - break; + pool->s.params = *params; + pool->s.init_params = *init_params; + + mdata_size = params->num_bufs * buf_stride; + udata_size = params->num_bufs * udata_stride; + + /* Optimize for short buffers: Data stored in buffer hdr */ + if (blk_size <= ODP_MAX_INLINE_BUF) + block_size = 0; + else + block_size = params->num_bufs * blk_size; + + pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(mdata_size + + udata_size + + block_size); + + if (shm == ODP_SHM_NULL) { + shm = odp_shm_reserve(pool->s.name, + pool->s.pool_size, + ODP_PAGE_SIZE, 0); + if (shm == ODP_SHM_INVALID) { + UNLOCK(&pool->s.lock); + return ODP_BUFFER_POOL_INVALID; + } + pool->s.pool_base_addr = odp_shm_addr(shm); + } else { + odp_shm_info_t info; + if (odp_shm_info(shm, &info) != 0 || + info.size < pool->s.pool_size) { + UNLOCK(&pool->s.lock); + return ODP_BUFFER_POOL_INVALID; + } + pool->s.pool_base_addr = odp_shm_addr(shm); + void *page_addr = + ODP_ALIGN_ROUNDUP_PTR(pool->s.pool_base_addr, + ODP_PAGE_SIZE); + if (pool->s.pool_base_addr != page_addr) { + if (info.size < pool->s.pool_size + + ((size_t)page_addr - + (size_t)pool->s.pool_base_addr)) { + UNLOCK(&pool->s.lock); + return ODP_BUFFER_POOL_INVALID; + } + pool->s.pool_base_addr = page_addr; + } + pool->s.flags.user_supplied_shm = 1; } + pool->s.pool_shm = shm; + + /* Now safe to unlock since pool entry has been allocated */ + UNLOCK(&pool->s.lock); + + pool->s.flags.unsegmented = unsegmented; + pool->s.flags.zeroized = zeroized; + pool->s.seg_size = unsegmented ? + blk_size : ODP_CONFIG_BUF_SEG_SIZE; + + uint8_t *udata_base_addr = pool->s.pool_base_addr + mdata_size; + uint8_t *block_base_addr = udata_base_addr + udata_size; + + /* bufcount will decrement down to 0 as we populate freelist */ + odp_atomic_store_u32(&pool->s.bufcount, params->num_bufs); + pool->s.buf_stride = buf_stride; + pool->s.high_wm = 0; + pool->s.low_wm = 0; + pool->s.headroom = 0; + pool->s.tailroom = 0; + _odp_atomic_ptr_store(&pool->s.buf_freelist, NULL, + _ODP_MEMMODEL_RLX); + _odp_atomic_ptr_store(&pool->s.blk_freelist, NULL, + _ODP_MEMMODEL_RLX); + + uint8_t *buf = udata_base_addr - buf_stride; + uint8_t *udat = udata_stride == 0 ?
NULL : + block_base_addr - udata_stride; + + /* Init buffer common header and add to pool buffer freelist */ + do { + odp_buffer_hdr_t *tmp = + (odp_buffer_hdr_t *)(void *)buf; + + /* Iniitalize buffer metadata */ + tmp->allocator = ODP_CONFIG_MAX_THREADS; + tmp->flags.all = 0; + tmp->flags.zeroized = zeroized; + tmp->size = 0; + odp_atomic_store_u32(&tmp->ref_count, 0); + tmp->type = params->buf_type; + tmp->pool_hdl = pool->s.pool_hdl; + tmp->udata_addr = (void *)udat; + tmp->udata_size = init_params->udata_size; + tmp->segcount = 0; + tmp->segsize = pool->s.seg_size; + tmp->handle.handle = odp_buffer_encode_handle(tmp); + + /* Set 1st seg addr for zero-len buffers */ + tmp->addr[0] = NULL; + + /* Special case for short buffer data */ + if (blk_size <= ODP_MAX_INLINE_BUF) { + tmp->flags.hdrdata = 1; + if (blk_size > 0) { + tmp->segcount = 1; + tmp->addr[0] = &tmp->addr[1]; + tmp->size = blk_size; + } + } + + /* Push buffer onto pool's freelist */ + ret_buf(&pool->s, tmp); + buf -= buf_stride; + udat -= udata_stride; + } while (buf >= pool->s.pool_base_addr); + + /* Form block freelist for pool */ + uint8_t *blk = pool->s.pool_base_addr + pool->s.pool_size - + pool->s.seg_size; + + if (blk_size > ODP_MAX_INLINE_BUF) + do { + ret_blk(&pool->s, blk); + blk -= pool->s.seg_size; + } while (blk >= block_base_addr); + + /* Initialize pool statistics counters */ + odp_atomic_store_u64(&pool->s.bufallocs, 0); + odp_atomic_store_u64(&pool->s.buffrees, 0); + odp_atomic_store_u64(&pool->s.blkallocs, 0); + odp_atomic_store_u64(&pool->s.blkfrees, 0); + odp_atomic_store_u64(&pool->s.bufempty, 0); + odp_atomic_store_u64(&pool->s.blkempty, 0); + odp_atomic_store_u64(&pool->s.high_wm_count, 0); + odp_atomic_store_u64(&pool->s.low_wm_count, 0); + + pool_hdl = pool->s.pool_hdl; + break; } return pool_hdl; @@ -431,145 +353,126 @@ odp_buffer_pool_t odp_buffer_pool_lookup(const char *name) return ODP_BUFFER_POOL_INVALID; } - -odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) +odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size) { - pool_entry_t *pool; - odp_buffer_chunk_hdr_t *chunk; - odp_buffer_bits_t handle; - uint32_t pool_id = pool_handle_to_index(pool_hdl); - - pool = get_pool_entry(pool_id); - chunk = local_chunk[pool_id]; - - if (chunk == NULL) { - LOCK(&pool->s.lock); - chunk = rem_chunk(pool); - UNLOCK(&pool->s.lock); - - if (chunk == NULL) - return ODP_BUFFER_INVALID; - - local_chunk[pool_id] = chunk; + pool_entry_t *pool = odp_pool_to_entry(pool_hdl); + size_t totsize = pool->s.headroom + size + pool->s.tailroom; + odp_anybuf_t *buf; + uint8_t *blk; + + if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) || + (!pool->s.flags.unsegmented && totsize > ODP_CONFIG_BUF_MAX_SIZE)) + return ODP_BUFFER_INVALID; + + buf = (odp_anybuf_t *)(void *)get_buf(&pool->s); + + if (buf == NULL) + return ODP_BUFFER_INVALID; + + /* Get blocks for this buffer, if pool uses application data */ + if (buf->buf.size < totsize) { + size_t needed = totsize - buf->buf.size; + do { + blk = get_blk(&pool->s); + if (blk == NULL) { + ret_buf(&pool->s, &buf->buf); + return ODP_BUFFER_INVALID; + } + buf->buf.addr[buf->buf.segcount++] = blk; + needed -= pool->s.seg_size; + } while ((ssize_t)needed > 0); + buf->buf.size = buf->buf.segcount * pool->s.seg_size; } - if (chunk->chunk.num_bufs == 0) { - /* give the chunk buffer */ - local_chunk[pool_id] = NULL; - chunk->buf_hdr.type = pool->s.buf_type; + /* By default, buffers inherit their pool's zeroization setting */ + buf->buf.flags.zeroized = 
pool->s.flags.zeroized; - handle = chunk->buf_hdr.handle; - } else { - odp_buffer_hdr_t *hdr; - uint32_t index; - index = rem_buf_index(chunk); - hdr = index_to_hdr(pool, index); + if (buf->buf.type == ODP_BUFFER_TYPE_PACKET) { + packet_init(pool, &buf->pkt, size); - handle = hdr->handle; + if (pool->s.init_params.buf_init != NULL) + (*pool->s.init_params.buf_init) + (buf->buf.handle.handle, + pool->s.init_params.buf_init_arg); } - return handle.u32; + return odp_hdr_to_buf(&buf->buf); } - -void odp_buffer_free(odp_buffer_t buf) +odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl) { - odp_buffer_hdr_t *hdr; - uint32_t pool_id; - pool_entry_t *pool; - odp_buffer_chunk_hdr_t *chunk_hdr; - - hdr = odp_buf_to_hdr(buf); - pool_id = pool_handle_to_index(hdr->pool_hdl); - pool = get_pool_entry(pool_id); - chunk_hdr = local_chunk[pool_id]; - - if (chunk_hdr && chunk_hdr->chunk.num_bufs == ODP_BUFS_PER_CHUNK - 1) { - /* Current chunk is full. Push back to the pool */ - LOCK(&pool->s.lock); - add_chunk(pool, chunk_hdr); - UNLOCK(&pool->s.lock); - chunk_hdr = NULL; - } - - if (chunk_hdr == NULL) { - /* Use this buffer */ - chunk_hdr = (odp_buffer_chunk_hdr_t *)hdr; - local_chunk[pool_id] = chunk_hdr; - chunk_hdr->chunk.num_bufs = 0; - } else { - /* Add to current chunk */ - add_buf_index(chunk_hdr, hdr->index); - } + return buffer_alloc(pool_hdl, + odp_pool_to_entry(pool_hdl)->s.params.buf_size); } - -odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) +void odp_buffer_free(odp_buffer_t buf) { - odp_buffer_hdr_t *hdr; - - hdr = odp_buf_to_hdr(buf); - return hdr->pool_hdl; + odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf); + pool_entry_t *pool = odp_buf_to_pool(buf_hdr); + ret_buf(&pool->s, buf_hdr); } - void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl) { pool_entry_t *pool; - odp_buffer_chunk_hdr_t *chunk_hdr; - uint32_t i; uint32_t pool_id; pool_id = pool_handle_to_index(pool_hdl); pool = get_pool_entry(pool_id); - ODP_PRINT("Pool info\n"); - ODP_PRINT("---------\n"); - ODP_PRINT(" pool %i\n", pool->s.pool_hdl); - ODP_PRINT(" name %s\n", pool->s.name); - ODP_PRINT(" pool base %p\n", pool->s.pool_base_addr); - ODP_PRINT(" buf base 0x%"PRIxPTR"\n", pool->s.buf_base); - ODP_PRINT(" pool size 0x%"PRIx64"\n", pool->s.pool_size); - ODP_PRINT(" buf size %zu\n", pool->s.user_size); - ODP_PRINT(" buf align %zu\n", pool->s.user_align); - ODP_PRINT(" hdr size %zu\n", pool->s.hdr_size); - ODP_PRINT(" alloc size %zu\n", pool->s.buf_size); - ODP_PRINT(" offset to hdr %zu\n", pool->s.buf_offset); - ODP_PRINT(" num bufs %"PRIu64"\n", pool->s.num_bufs); - ODP_PRINT(" free bufs %"PRIu64"\n", pool->s.free_bufs); - - /* first chunk */ - chunk_hdr = pool->s.head; - - if (chunk_hdr == NULL) { - ODP_ERR(" POOL EMPTY\n"); - return; - } - - ODP_PRINT("\n First chunk\n"); - - for (i = 0; i < chunk_hdr->chunk.num_bufs - 1; i++) { - uint32_t index; - odp_buffer_hdr_t *hdr; - - index = chunk_hdr->chunk.buf_index[i]; - hdr = index_to_hdr(pool, index); - - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, hdr->addr, - index); - } - - ODP_PRINT(" [%i] addr %p, id %"PRIu32"\n", i, chunk_hdr->buf_hdr.addr, - chunk_hdr->buf_hdr.index); - - /* next chunk */ - chunk_hdr = next_chunk(pool, chunk_hdr); + uint32_t bufcount = odp_atomic_load_u32(&pool->s.bufcount); + uint32_t blkcount = odp_atomic_load_u32(&pool->s.blkcount); + uint64_t bufallocs = odp_atomic_load_u64(&pool->s.bufallocs); + uint64_t buffrees = odp_atomic_load_u64(&pool->s.buffrees); + uint64_t blkallocs = odp_atomic_load_u64(&pool->s.blkallocs); + uint64_t blkfrees 
= odp_atomic_load_u64(&pool->s.blkfrees); + uint64_t bufempty = odp_atomic_load_u64(&pool->s.bufempty); + uint64_t blkempty = odp_atomic_load_u64(&pool->s.blkempty); + uint64_t hiwmct = odp_atomic_load_u64(&pool->s.high_wm_count); + uint64_t lowmct = odp_atomic_load_u64(&pool->s.low_wm_count); + + ODP_DBG("Pool info\n"); + ODP_DBG("---------\n"); + ODP_DBG(" pool %i\n", pool->s.pool_hdl); + ODP_DBG(" name %s\n", + pool->s.flags.has_name ? pool->s.name : "Unnamed Pool"); + ODP_DBG(" pool type %s\n", + pool->s.params.buf_type == ODP_BUFFER_TYPE_RAW ? "raw" : + (pool->s.params.buf_type == ODP_BUFFER_TYPE_PACKET ? "packet" : + (pool->s.params.buf_type == ODP_BUFFER_TYPE_TIMEOUT ? "timeout" : + (pool->s.params.buf_type == ODP_BUFFER_TYPE_ANY ? "any" : + "unknown")))); + ODP_DBG(" pool storage %sODP managed\n", + pool->s.flags.user_supplied_shm ? + "application provided, " : ""); + ODP_DBG(" pool status %s\n", + pool->s.flags.quiesced ? "quiesced" : "active"); + ODP_DBG(" pool opts %s, %s, %s\n", + pool->s.flags.unsegmented ? "unsegmented" : "segmented", + pool->s.flags.zeroized ? "zeroized" : "non-zeroized", + pool->s.flags.predefined ? "predefined" : "created"); + ODP_DBG(" pool base %p\n", pool->s.pool_base_addr); + ODP_DBG(" pool size %zu (%zu pages)\n", + pool->s.pool_size, pool->s.pool_size / ODP_PAGE_SIZE); + ODP_DBG(" udata size %zu\n", pool->s.init_params.udata_size); + ODP_DBG(" buf size %zu\n", pool->s.params.buf_size); + ODP_DBG(" num bufs %u\n", pool->s.params.num_bufs); + ODP_DBG(" bufs in use %u\n", bufcount); + ODP_DBG(" buf allocs %lu\n", bufallocs); + ODP_DBG(" buf frees %lu\n", buffrees); + ODP_DBG(" buf empty %lu\n", bufempty); + ODP_DBG(" blk size %zu\n", + pool->s.seg_size > ODP_MAX_INLINE_BUF ? pool->s.seg_size : 0); + ODP_DBG(" blks available %u\n", blkcount); + ODP_DBG(" blk allocs %lu\n", blkallocs); + ODP_DBG(" blk frees %lu\n", blkfrees); + ODP_DBG(" blk empty %lu\n", blkempty); + ODP_DBG(" high wm count %lu\n", hiwmct); + ODP_DBG(" low wm count %lu\n", lowmct); +} - if (chunk_hdr) { - ODP_PRINT(" Next chunk\n"); - ODP_PRINT(" addr %p, id %"PRIu32"\n", chunk_hdr->buf_hdr.addr, - chunk_hdr->buf_hdr.index); - } - ODP_PRINT("\n"); +odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->pool_hdl; } diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index f8fd8ef..8deae3d 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -23,17 +23,9 @@ static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr, void odp_packet_init(odp_packet_t pkt) { odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); - const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr); - uint8_t *start; - size_t len; - - start = (uint8_t *)pkt_hdr + start_offset; - len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset; - memset(start, 0, len); + pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr); - pkt_hdr->l2_offset = ODP_PACKET_OFFSET_INVALID; - pkt_hdr->l3_offset = ODP_PACKET_OFFSET_INVALID; - pkt_hdr->l4_offset = ODP_PACKET_OFFSET_INVALID; + packet_init(pool, pkt_hdr, 0); } odp_packet_t odp_packet_from_buffer(odp_buffer_t buf) @@ -63,7 +55,7 @@ uint8_t *odp_packet_addr(odp_packet_t pkt) uint8_t *odp_packet_data(odp_packet_t pkt) { - return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->frame_offset; + return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->headroom; } @@ -130,20 +122,13 @@ void odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset) int 
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c
index f8fd8ef..8deae3d 100644
--- a/platform/linux-generic/odp_packet.c
+++ b/platform/linux-generic/odp_packet.c
@@ -23,17 +23,9 @@ static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr,
 void odp_packet_init(odp_packet_t pkt)
 {
 	odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt);
-	const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr);
-	uint8_t *start;
-	size_t len;
-
-	start = (uint8_t *)pkt_hdr + start_offset;
-	len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset;
-	memset(start, 0, len);
+	pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr);
 
-	pkt_hdr->l2_offset = ODP_PACKET_OFFSET_INVALID;
-	pkt_hdr->l3_offset = ODP_PACKET_OFFSET_INVALID;
-	pkt_hdr->l4_offset = ODP_PACKET_OFFSET_INVALID;
+	packet_init(pool, pkt_hdr, 0);
 }
 
 odp_packet_t odp_packet_from_buffer(odp_buffer_t buf)
@@ -63,7 +55,7 @@ uint8_t *odp_packet_addr(odp_packet_t pkt)
 
 uint8_t *odp_packet_data(odp_packet_t pkt)
 {
-	return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->frame_offset;
+	return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->headroom;
 }
 
 
@@ -130,20 +122,13 @@ void odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset)
 int odp_packet_is_segmented(odp_packet_t pkt)
 {
-	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt);
-
-	if (buf_hdr->scatter.num_bufs == 0)
-		return 0;
-	else
-		return 1;
+	return odp_packet_hdr(pkt)->buf_hdr.segcount > 1;
 }
 
 int odp_packet_seg_count(odp_packet_t pkt)
 {
-	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt);
-
-	return (int)buf_hdr->scatter.num_bufs + 1;
+	return odp_packet_hdr(pkt)->buf_hdr.segcount;
 }
 
 
@@ -169,7 +154,7 @@ void odp_packet_parse(odp_packet_t pkt, size_t len, size_t frame_offset)
 	uint8_t ip_proto = 0;
 
 	pkt_hdr->input_flags.eth = 1;
-	pkt_hdr->frame_offset = frame_offset;
+	pkt_hdr->l2_offset = frame_offset;
 	pkt_hdr->frame_len = len;
 
 	if (len > ODPH_ETH_LEN_MAX)
@@ -329,8 +314,6 @@ void odp_packet_print(odp_packet_t pkt)
 	len += snprintf(&str[len], n-len,
 			" output_flags 0x%x\n", hdr->output_flags.all);
 	len += snprintf(&str[len], n-len,
-			" frame_offset %u\n", hdr->frame_offset);
-	len += snprintf(&str[len], n-len,
 			" l2_offset    %u\n", hdr->l2_offset);
 	len += snprintf(&str[len], n-len,
 			" l3_offset    %u\n", hdr->l3_offset);
@@ -357,14 +340,13 @@ int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src)
 	if (pkt_dst == ODP_PACKET_INVALID || pkt_src == ODP_PACKET_INVALID)
 		return -1;
 
-	if (pkt_hdr_dst->buf_hdr.size <
-	    pkt_hdr_src->frame_len + pkt_hdr_src->frame_offset)
+	if (pkt_hdr_dst->buf_hdr.size < pkt_hdr_src->frame_len)
 		return -1;
 
 	/* Copy packet header */
 	start_dst = (uint8_t *)pkt_hdr_dst + start_offset;
 	start_src = (uint8_t *)pkt_hdr_src + start_offset;
-	len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset;
+	len = sizeof(odp_packet_hdr_t) - start_offset;
 	memcpy(start_dst, start_src, len);
 
 	/* Copy frame payload */
@@ -373,13 +355,6 @@ int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src)
 	len = pkt_hdr_src->frame_len;
 	memcpy(start_dst, start_src, len);
 
-	/* Copy useful things from the buffer header */
-	pkt_hdr_dst->buf_hdr.cur_offset = pkt_hdr_src->buf_hdr.cur_offset;
-
-	/* Create a copy of the scatter list */
-	odp_buffer_copy_scatter(odp_packet_to_buffer(pkt_dst),
-				odp_packet_to_buffer(pkt_src));
-
 	return 0;
 }
 
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 1318bcd..b68a7c7 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -11,6 +11,7 @@
 #include <odp_buffer.h>
 #include <odp_buffer_internal.h>
 #include <odp_buffer_pool_internal.h>
+#include <odp_buffer_inlines.h>
 #include <odp_internal.h>
 #include <odp_shared_memory.h>
 #include <odp_schedule_internal.h>
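
Dropping frame_offset in favor of headroom, and scatter.num_bufs in favor of segcount, removes the old off-by-one. To make the new invariants concrete, a hypothetical sanity check (names taken from the hunks above; odp_packet_hdr() is internal, so this is illustration only):

	uint8_t *base = odp_packet_addr(pkt);	/* start of buffer data */
	uint8_t *data = odp_packet_data(pkt);	/* first byte of the frame */

	assert(data == base + odp_packet_hdr(pkt)->headroom);
	/* every packet owns >= 1 segment, so "segmented" is a count test */
	assert(odp_packet_is_segmented(pkt) == (odp_packet_seg_count(pkt) > 1));
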
"); @@ -99,20 +99,12 @@ int odp_schedule_init_global(void) return -1; } - shm = odp_shm_reserve("odp_sched_pool", - SCHED_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); + params.buf_size = sizeof(queue_desc_t); + params.buf_align = ODP_CACHE_LINE_SIZE; + params.num_bufs = SCHED_POOL_SIZE/sizeof(queue_desc_t); + params.buf_type = ODP_BUFFER_TYPE_RAW; - pool_base = odp_shm_addr(shm); - - if (pool_base == NULL) { - ODP_ERR("Schedule init: Shm reserve failed.\n"); - return -1; - } - - pool = odp_buffer_pool_create("odp_sched_pool", pool_base, - SCHED_POOL_SIZE, sizeof(queue_desc_t), - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_RAW); + pool = odp_buffer_pool_create("odp_sched_pool", ODP_SHM_NULL, ¶ms); if (pool == ODP_BUFFER_POOL_INVALID) { ODP_ERR("Schedule init: Pool create failed.\n"); diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c index 313c713..914cb58 100644 --- a/platform/linux-generic/odp_timer.c +++ b/platform/linux-generic/odp_timer.c @@ -5,9 +5,10 @@ */ #include <odp_timer.h> -#include <odp_timer_internal.h> #include <odp_time.h> #include <odp_buffer_pool_internal.h> +#include <odp_buffer_inlines.h> +#include <odp_timer_internal.h> #include <odp_internal.h> #include <odp_atomic.h> #include <odp_spinlock.h> diff --git a/test/api_test/odp_timer_ping.c b/test/api_test/odp_timer_ping.c index 7704181..1566f4f 100644 --- a/test/api_test/odp_timer_ping.c +++ b/test/api_test/odp_timer_ping.c @@ -319,9 +319,8 @@ int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED) ping_arg_t pingarg; odp_queue_t queue; odp_buffer_pool_t pool; - void *pool_base; int i; - odp_shm_t shm; + odp_buffer_pool_param_t params; if (odp_test_global_init() != 0) return -1; @@ -334,14 +333,14 @@ int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED) /* * Create message pool */ - shm = odp_shm_reserve("msg_pool", - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); - pool_base = odp_shm_addr(shm); - - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, - BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_RAW); + + params.buf_size = BUF_SIZE; + params.buf_align = 0; + params.num_bufs = MSG_POOL_SIZE/BUF_SIZE; + params.buf_type = ODP_BUFFER_TYPE_RAW; + + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); + if (pool == ODP_BUFFER_POOL_INVALID) { LOG_ERR("Pool create failed.\n"); return -1; diff --git a/test/validation/odp_crypto.c b/test/validation/odp_crypto.c index 9342aca..e329b05 100644 --- a/test/validation/odp_crypto.c +++ b/test/validation/odp_crypto.c @@ -31,8 +31,7 @@ CU_SuiteInfo suites[] = { int main(void) { - odp_shm_t shm; - void *pool_base; + odp_buffer_pool_param_t params; odp_buffer_pool_t pool; odp_queue_t out_queue; @@ -42,21 +41,13 @@ int main(void) } odp_init_local(); - shm = odp_shm_reserve("shm_packet_pool", - SHM_PKT_POOL_SIZE, - ODP_CACHE_LINE_SIZE, 0); + params.buf_size = SHM_PKT_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE; + params.buf_type = ODP_BUFFER_TYPE_PACKET; - pool_base = odp_shm_addr(shm); - if (!pool_base) { - fprintf(stderr, "Packet pool allocation failed.\n"); - return -1; - } + pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, ¶ms); - pool = odp_buffer_pool_create("packet_pool", pool_base, - SHM_PKT_POOL_SIZE, - SHM_PKT_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_PACKET); if (ODP_BUFFER_POOL_INVALID == pool) { fprintf(stderr, "Packet pool creation failed.\n"); return -1; @@ -67,20 +58,14 @@ int main(void) fprintf(stderr, "Crypto outq creation failed.\n"); return -1; } - 
shm = odp_shm_reserve("shm_compl_pool", - SHM_COMPL_POOL_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_SHM_SW_ONLY); - pool_base = odp_shm_addr(shm); - if (!pool_base) { - fprintf(stderr, "Completion pool allocation failed.\n"); - return -1; - } - pool = odp_buffer_pool_create("compl_pool", pool_base, - SHM_COMPL_POOL_SIZE, - SHM_COMPL_POOL_BUF_SIZE, - ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_RAW); + + params.buf_size = SHM_COMPL_POOL_BUF_SIZE; + params.buf_align = 0; + params.num_bufs = SHM_COMPL_POOL_SIZE/SHM_COMPL_POOL_BUF_SIZE; + params.buf_type = ODP_BUFFER_TYPE_RAW; + + pool = odp_buffer_pool_create("compl_pool", ODP_SHM_NULL, ¶ms); + if (ODP_BUFFER_POOL_INVALID == pool) { fprintf(stderr, "Completion pool creation failed.\n"); return -1; diff --git a/test/validation/odp_queue.c b/test/validation/odp_queue.c index 09dba0e..9d0f3d7 100644 --- a/test/validation/odp_queue.c +++ b/test/validation/odp_queue.c @@ -16,21 +16,14 @@ static int queue_contest = 0xff; static int test_odp_buffer_pool_init(void) { odp_buffer_pool_t pool; - void *pool_base; - odp_shm_t shm; + odp_buffer_pool_param_t params; - shm = odp_shm_reserve("msg_pool", - MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0); + params.buf_size = 0; + params.buf_align = ODP_CACHE_LINE_SIZE; + params.num_bufs = 1024 * 10; + params.buf_type = ODP_BUFFER_TYPE_RAW; - pool_base = odp_shm_addr(shm); - - if (NULL == pool_base) { - printf("Shared memory reserve failed.\n"); - return -1; - } - - pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, 0, - ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW); + pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); if (ODP_BUFFER_POOL_INVALID == pool) { printf("Pool create failed.\n");