@@ -35,19 +35,25 @@ extern "C" {
/**
* Buffer pool parameters
* Used to communicate buffer pool creation options.
+ *
+ * @see ODP_CONFIG_PACKET_BUF_LEN_MIN, ODP_CONFIG_BUFFER_ALIGN_MIN,
+ * ODP_CONFIG_BUFFER_ALIGN_MAX
*/
typedef struct odp_buffer_pool_param_t {
- uint32_t buf_size; /**< Buffer size in bytes. The maximum
- number of bytes application will
- store in each buffer. For packets, this
- is the maximum packet data length, and
- configured headroom and tailroom will be
- added to this number */
- uint32_t buf_align; /**< Minimum buffer alignment in bytes.
- Valid values are powers of two. Use 0
- for default alignment. Default will
- always be a multiple of 8. */
- uint32_t num_bufs; /**< Number of buffers in the pool */
+ uint32_t buf_size; /**< Minimum buffer size in bytes. For packets,
+ this is the minimum segment buffer length,
+ which includes possible head-/tailroom bytes.
+ Use 0 for the default size of the buffer type
+ (e.g. for timeouts or min packet segment
+ length).*/
I continue to have difficulty understanding how the implementation is expected to use this interpretation of buf_size to calculate how much storage to reserve for the buffer pool. The implementation needs to know this, and under the former definition that was straightforward, since the application specified how large each buffer could be. A minimum does not convey that.
The application specifies the minimum size (e.g. 100 bytes). The implementation can round it up to a sensible value (e.g. 128 bytes). odp_buffer_size() and odp_packet_seg_buf_len() then return the actual buffer length (e.g. 128 bytes). Pool storage is num_bufs * (rounded-up buf_size + internal headers, per-buffer overheads, etc.).
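For concreteness, here is a minimal sketch (not ODP implementation code) of how an implementation could size pool storage under the minimum-size interpretation. The rounding granularity, the per-buffer overhead value, and the helper name pool_storage_size are illustrative assumptions only:

#include <stdint.h>
#include <stddef.h>

/* Round x up to a multiple of a; a must be a power of two */
#define ROUND_UP(x, a)  (((x) + (a) - 1) & ~((uint32_t)(a) - 1))

static size_t pool_storage_size(uint32_t buf_size, uint32_t buf_align,
                                uint32_t num_bufs)
{
	/* Default alignment when the application passes 0 (a multiple of 8) */
	uint32_t align = buf_align ? buf_align : 8;

	/* Round the requested minimum up to an implementation-friendly
	 * value, e.g. the next multiple of 64 bytes, then to the alignment.
	 * odp_buffer_size() / odp_packet_seg_buf_len() would report this. */
	uint32_t actual_len = ROUND_UP(ROUND_UP(buf_size, 64), align);

	/* Illustrative per-buffer overhead: internal headers, padding, etc. */
	const size_t per_buf_overhead = 64;

	return (size_t)num_bufs * (actual_len + per_buf_overhead);
}

With buf_size = 100 and default alignment this rounds the actual buffer length up to 128 bytes, matching the example above; the total reservation is then num_bufs times the rounded length plus the per-buffer overhead.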
+ uint32_t buf_align; /**< Minimum buffer alignment in bytes. Valid values
+ are powers of two. Use 0 for default
+ alignment. Default will always be a multiple
+ of 8. */
+ uint32_t num_bufs; /**< Number of buffers in the pool. For packets,