From patchwork Sat May 3 00:42:39 2014
X-Patchwork-Submitter: Mike Holmes
X-Patchwork-Id: 29603
From: Mike Holmes <mike.holmes@linaro.org>
To: lng-odp@lists.linaro.org
Date: Fri, 2 May 2014 20:42:39 -0400
Message-Id: <1399077759-35167-1-git-send-email-mike.holmes@linaro.org>
X-Mailer: git-send-email 1.9.1
Subject: [lng-odp] [PATCH] Documentation: split mainpage from header

Signed-off-by: Mike Holmes <mike.holmes@linaro.org>
---
 doc/odp.dox                        | 255 +++++++++++++++++++++++++++++++++++++
 include/odp.h                      | 250 ------------------------------------
 platform/linux-generic/Doxyfile.in |   4 +-
 3 files changed, 257 insertions(+), 252 deletions(-)
 create mode 100644 doc/odp.dox

diff --git a/doc/odp.dox b/doc/odp.dox
new file mode 100644
index 0000000..328c7bb
--- /dev/null
+++ b/doc/odp.dox
@@ -0,0 +1,255 @@
+/* Copyright (c) 2013, Linaro Limited
+ * All rights reserved
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @mainpage
+ *
+ * @section sec_1 Introduction
+ *
+ * OpenDataPlane (ODP) provides a data plane application programming
+ * environment that is easy to use, high performance, and portable
+ * between networking SoCs.
+ * This documentation is both a user guide
+ * for developers who wish to use ODP and a detailed reference for ODP
+ * programmers covering APIs, data structures, files, etc. It should
+ * also be useful for those wishing to implement ODP on other
+ * platforms.
+ *
+ * @image html overview.png
+ *
+ * ODP consists of a common layer and an implementation layer.
+ * Applications written to the common layer are portable across all
+ * ODP implementations. To compile and run an ODP application, it is
+ * compiled against a specific ODP implementation layer. The purpose
+ * of the implementation layer is to provide an optimal mapping of ODP
+ * APIs to the underlying capabilities (including hardware
+ * co-processing and acceleration support) of the SoCs hosting ODP
+ * implementations. As a bootstrapping mechanism for applications, as
+ * well as to provide a model for ODP implementers, ODP provides a
+ * 'linux-generic' reference implementation designed to run on any SoC
+ * which has a Linux kernel. While linux-generic is not a performance
+ * target, it does provide a starting point for ODP implementers and
+ * application programmers alike. As a pure software implementation
+ * of ODP, linux-generic is designed to provide best-in-class performance
+ * for general Linux data plane support.
+ *
+ * @section staging Staging
+ *
+ * ODP is a work in progress and is expected to evolve significantly
+ * as it develops. Since the goal of ODP is to provide portability
+ * across disparate platforms and architectures while still providing
+ * near-native levels of performance on each conforming
+ * implementation, it is expected that the ODP architecture and the
+ * APIs presented here will evolve based on experience in
+ * implementing and tuning ODP for operation on multiple platforms.
+ * For the time being, then, the goal is not so much to present a
+ * stable API as a usable one that can be built
+ * upon to reach a clearly defined end goal.
+ *
+ * ODP releases will follow a standard major/minor/revision
+ * three-level naming designation. The intent is that APIs will be
+ * stable across major revisions such that existing APIs will work
+ * unchanged within a major revision, though minor revisions may add
+ * new APIs. Across major revisions some API changes may make
+ * application source changes necessary. These will be clearly noted
+ * in the release notes associated with any given ODP release.
+ *
+ * This consistency will commence with the 1.0.0 release of ODP, which
+ * is expected later in 2014. Prior to release 1.0.0 it should be
+ * expected that minor revisions may require API source changes as ODP
+ * is still "growing its roots". This is release 0.1.0 of ODP and is
+ * being made available as a "public preview" to the open source
+ * community for comment/feedback/evaluation.
+ *
+ * @section contact Contact Details
+ * - The main web site is http://www.opendataplane.org/
+ * - The git repo is https://git.linaro.org/lng/odp.git
+ * - Bug tracking is at https://launchpad.net/linaro-odp
+ *
+ *
+ * @section sec_2 User guide
+ *
+ * @subsection sub2_1 The ODP API
+ *
+ * This file (odp.h) is the main ODP API file. Users should include
+ * only this file to preserve portability, since the structure and
+ * naming of sub-header files may change between implementations.
+ *
+ * @subsection sub2_2 Threading
+ *
+ * ODP does not specify a threading model. Applications can use
+ * processes, pthreads, or Roll-Your-Own (RYO) threading/fibre
+ * mechanisms for multi-threading as needed. Creation and control of
+ * threads is the responsibility of the ODP application. For optimal
+ * performance on many-core SoCs, it is recommended that threads be
+ * run on dedicated cores. ODP provides high-level APIs for core
+ * enumeration and assignment while the corresponding ODP
+ * implementation layer provides the appropriate mechanisms to realize
+ * these functions.
+ *
+ * Threads used for ODP processing should be pinned to separate cores.
+ * Commonly these threads process packets in a run-to-completion loop.
+ * Applications should avoid blocking threads used for ODP processing,
+ * since doing so may cause blocking on other threads/cores.
+ *
+ * @subsection sub2_3 ODP initialisation
+ *
+ * Before calling any other ODP API function, the ODP library must be
+ * initialised by calling odp_init_global() once and odp_init_local()
+ * on each of the cores sharing the same ODP environment (instance).
+ *
+ * @subsection sub2_4 API Categories
+ *
+ * APIs provided by ODP cover the following areas:
+ *
+ * @subsubsection memory_management Memory Management
+ *
+ * This includes macros and other APIs to control memory alignment
+ * of data structures as well as allocation/deallocation services
+ * for ODP-managed objects. Note that ODP does not wrap malloc()
+ * or similar platform-specific APIs for the sake of wrapping.
+ *
+ * @subsubsection buffer_management Buffer Management
+ *
+ * This includes APIs for defining and managing buffer pools used
+ * for packets and other bulk purposes. Note that the allocation
+ * and release of buffers from buffer pools is not something done
+ * explicitly by ODP applications, but rather by the APIs that use
+ * these buffers. This is because in most SoCs, actual buffer
+ * allocation and release is accelerated and performed by hardware.
+ * Software's role in buffer management is normally reserved to
+ * allocating large chunks of memory which are then given to hardware
+ * for automatic management as pools of buffers. In this way the ODP
+ * application operates independently of how buffers are managed by
+ * the underlying ODP implementation.
+ *
+ * @subsubsection packet_management Packet Management
+ *
+ * This includes APIs and accessor functions for packet descriptors
+ * as well as packet receipt and transmission.
+ *
+ * @subsubsection synchronisation Synchronization
+ *
+ * This includes APIs and related functions for synchronization
+ * involving other ODP APIs, such as barriers and related atomics.
+ * Again, as ODP does not specify a threading model, applications
+ * make use of whatever synchronization primitives are native to the
+ * model they use.
+ *
+ * @subsubsection core_enumeration Core Enumeration and Management
+ *
+ * This includes APIs to allow applications to enumerate and
+ * reference cores and per-core data structures.
+ *
+ * @subsection sub2_5 Miscellaneous Facilities
+ *
+ * ODP includes miscellaneous facilities for compiler hints and
+ * optimizations common in GCC. [Not sure if we want to consider
+ * these an "API" per se.]
+ *
+ * @subsection sub2_6 Application Programming Model
+ *
+ * ODP supports applications that execute using a "run to completion"
+ * programming model. This means that once dispatched, application
+ * threads are not interrupted by the kernel or other scheduling
+ * entity.
+ *
+ * Application threads receive work requests as \a events that are
+ * delivered on application and/or implementation defined
+ * \a queues. ODP application code would thus normally be
+ * structured as follows:
+ *
+ * @code
+ * #include <odp.h>
+ * ...other needed #includes
+ *
+ * int main(int argc, char *argv[])
+ * {
+ *     ...application-specific initialization
+ *     odp_init_global();
+ *
+ *     ...launch threads
+ *     ...wait for threads to terminate
+ * }
+ *
+ * void worker_thread(parameters)
+ * {
+ *     odp_init_local();
+ *
+ *     while (1) {
+ *         do_work(get_work()); // Replace with ODP calls when defined
+ *     }
+ * }
+ * @endcode
+ *
+ * Events are received on input queues and are processed until they are
+ * placed on an output queue of some sort. The thread then gets the
+ * next event to be processed from an input queue and repeats the
+ * process.
+ *
+ * @subsection sub3_1 Asynchronous Operations
+ *
+ * Note that work to be performed by a thread may require access to an
+ * asynchronous function that takes a significant amount of time to
+ * complete. In such cases the event is forwarded to another worker
+ * thread or hardware accelerator, depending on the implementation, by
+ * placing it on another queue, which is an output queue of the
+ * thread making the request. This event in turn is received and
+ * processed by the thread/accelerator that handles it via its input
+ * queue. When this asynchronous operation is complete, the event is
+ * placed on the handler's output queue, which feeds back to the
+ * original requestor's input queue. When the requesting thread next
+ * receives this event it resumes processing of the event following
+ * the asynchronous operation and works on it either until it is ready
+ * for final disposition, or until another asynchronous operation is
+ * required to process the event.
+ *
+ * @subsection sub3_2 Queue Linkages
+ *
+ * The mapping of input and output queues that connect worker threads
+ * to accelerators and related offload functions is a cooperation
+ * between the implementation and the ODP application. The
+ * implementation defines the service functions that are available to
+ * worker threads (e.g., crypto offload services) and as part of that
+ * definition defines the queue structure that connects requests to
+ * those services as well as the outputs from those services that
+ * connect back to the requesting workers. The ODP application, in
+ * turn, defines the number of worker threads and how they cooperate
+ * among themselves. Note that the application may use ODP core
+ * enumeration APIs to decide how many such worker threads should be
+ * deployed.
+ *
+ * @subsection sub3_3 Packet I/O
+ *
+ * In ODP, packet I/O is implicit in reading from and writing to
+ * queues associated with interfaces.
+ * An ODP application receives packets by
+ * dequeuing an event from an input queue associated with an I/O
+ * interface. This either triggers a packet read or (more likely)
+ * simply provides the next (queued) packet from the associated
+ * interface. The actual mechanism used to effect the receipt of the
+ * packet is left to the ODP implementation and may involve any
+ * combination of software and/or hardware operations.
+ *
+ * Similarly, packet transmission is performed by writing a packet to
+ * an output queue associated with an I/O interface. Again, this
+ * schedules the packet for output using some combination of software
+ * and/or hardware as determined by the implementation. ODP applications
+ * themselves, therefore, are freed from the details of how packet I/O
+ * is performed or buffered to minimize latencies. The latter is the
+ * concern of the ODP implementation to achieve optimal results for
+ * the platform supporting the implementation.
+ *
+ * @subsection sub3_4 How to Use this Reference
+ *
+ * This reference provides an overview of each data structure and API
+ * function, along with a graphical representation of the various
+ * structural dependencies among them. When using the HTML version of
+ * this reference, all links are dynamic and provide access to the
+ * underlying implementation source files as well, thus providing both
+ * a ready reference to API parameters and syntax, as well as
+ * convenient access to the actual implementation behind them to
+ * further programmer understanding.
+ */

diff --git a/include/odp.h b/include/odp.h
index 9bb68a2..0ee3faf 100644
--- a/include/odp.h
+++ b/include/odp.h
@@ -11,256 +11,6 @@
  *
  */
 
-/**
- * @mainpage
- *
- * @section sec_1 Introduction
- *
- * OpenDataPlane (ODP) provides a data plane application programming
- * environment that is easy to use, high performance, and portable
- * between networking SoCs.
- * This documentation is both a user guide
- * for developers who wish to use ODP and a detailed reference for ODP
- * programmers covering APIs, data structures, files, etc. It should
- * also be useful for those wishing to implement ODP on other
- * platforms.
- *
- * @image html overview.png
- *
- * ODP consists of a common layer and an implementation layer.
- * Applications written to the common layer are portable across all
- * ODP implementations. To compile and run an ODP application, it is
- * compiled against a specific ODP implementation layer. The purpose
- * of the implementation layer is to provide an optimal mapping of ODP
- * APIs to the underlying capabilities (including hardware
- * co-processing and acceleration support) of of SoCs hosting ODP
- * implementations. As a bootstrapping mechanism for applications, as
- * well as to provide a model for ODP implementers, ODP provides a
- * 'linux-generic' reference implementation designed to run on any SoC
- * which has a Linux kernel. While linux-generic is not a performance
- * target, it does provide a starting point for ODP implementers and
- * application programmers alike. As a pure software implementation
- * of ODP, linux-generic is designed to provide best-in-class performance
- * for general Linux data plane support.
- *
- * @section Staging
- *
- * ODP is a work in progress and is expected to evolve significantly
- * as it develops. Since the goal of ODP is to provide portability
- * across disparate platforms and architectures while still providing
- * near-native levels of performance on each conforming
- * implementation, it is expected that the ODP architecture and the
- * APIs presented here will evolve based on the experience in
- * implementing and tuning ODP for operation on multiple platforms.
- * For the time being, then, the goal here is not so much as to
- * present a stable API, but rather a usable one that can be built
- * upon to reach a clearly defined end goal.
- *
- * ODP releases will follow a standard major/minor/revision
- * three-level naming designation. The intent is that APIs will be
- * stable across major revisions such that existing APIs will work
- * unchanged within a major revision, though minor revisions may add
- * new APIs. Across major revisions some API changes may make
- * application source changes necesary. These will be clearly noted
- * in the release notes associated with any given ODP release.
- *
- * This consistency will commence with the 1.0.0 release of ODP, which
- * is expected later in 2014. Pre-release 1 it should be expected
- * that minor revisions may require API source changes as ODP is still
- * "growing its roots". This is release 0.1.0 of ODP and is being
- * made available as a "public preview" to the open source community
- * for comment/feedback/evaluation.
- *
- * @section contact Contact Details
- * - The main web site is http://www.opendataplane.org/
- * - The git repo is https://git.linaro.org/lng/odp.git
- * - Bug tracking https://launchpad.net/linaro-odp
- *
- *
- * @section sec_2 User guide
- *
- * @subsection sub2_1 The ODP API
- *
- * This file (odp.h) is the main ODP API file. User should include only this
- * file to keep portability since structure and naming of sub header files
- * may be change between implementations.
- *
- * @subsection sub2_2 Threading
- *
- * ODP does not specify a threading model. Applications can use
- * processes or pthreads, or Roll-Your-Own (RYO) threading/fibre
- * mechanisms for multi-threading as needed. Creation and control of
- * threads is the responsibility of the ODP application. For optimal
- * performance on many-core SoCs, it is recommended that threads be
- * run on dedicated cores. ODP provides high-level APIs for core
- * enumeration and assignment while the corresponding ODP
- * implementation layer provides the appropriate mechanisms to realize
- * these functions.
- *
- * Threads used for ODP processing should be pinned into separate cores.
- * Commonly these threads process packets in a run-to-completion loop.
- * Application should avoid blocking threads used for ODP processing,
- * since it may cause blocking on other threads/cores.
- *
- * @subsection sub2_3 ODP initialisation
- *
- * Before calling any other ODP API functions, ODP library must be
- * initialised by calling odp_init_global() once and odp_init_local()
- * on each of the cores sharing the same ODP environment (instance).
- *
- * @subsection sub2_4 API Categories
- *
- * APIs provided by ODP cover the following areas:
- *
- * @subsubsection memory_management Memory Management
- *
- * This includes macros and other APIs to control memory alignments
- * of data structures as well as allocation/deallocation services
- * for ODP-managed objects. Note that ODP does not wrapper malloc()
- * or similar platform specific APIs for the sake of wrappering.
- *
- * @subsubsection buffer_management Buffer Management
- *
- * This includes APIs for defining and managing buffer pools used
- * for packets and other bulk purposes. Note that the allocation
- * and release of buffers from buffer pools is not something done
- * explicitly by ODP applications, but rather by APIs that use these
- * buffers. This is because in most SoCs, actual buffer allocation
- * and release is accelerated and performed by hardware. Software's
- * role in buffer management is normally reserved to allocating
- * large chunks of memory which are then given to hardware for
- * automatic management as pools of buffers. In this way the ODP
- * application operates independent of how buffers are managed by
- * the underlying ODP implementation.
- *
- * @subsubsection packet_management Packet Management
- *
- * This includes APIs and accessor functions for packet descriptors
- * as well as packet receipt and transmission.
- *
- * @subsubsection syncronisation Synchronization
- *
- * This includes APIs and related functions for synchronization
- * involving other ODP APIs, such as barriers and related atomics.
- * Again, as ODP does not specify a threading model applications
- * make use whatever synchronization primitives are native to the
- * model they use.
- *
- * @subsubsection core_enumeration Core Enumeration and managment
- *
- * This includes APIs to allow applications to enumerate and
- * reference cores and per-core data structures.
- *
- * @subsection sub2_5 Miscellaneous Facilities
- *
- * ODP includes miscellaneous facilities for compiler hints and
- * optimizations common in GCC. [Not sure if we want to consider
- * these an "API" per se].
- *
- * @subsection sub2_6 Application Programming Model
- *
- * ODP supports applications that execute using a "run to completion"
- * programming model. This means that once dispatched, application
- * threads are not interrupted by the kernel or other scheduling
- * entity.
- *
- * Application threads receive work requests as \a events that are
- * delivered on application and/or implementation defined
- * \a queues. ODP application code would thus normally be
- * structured as follows:
- *
- * @code
- * #include <odp.h>
- * ...other needed #includes
- *
- * int main (int argc, char *argv[])
- * {
- *     ...application-specific initialization
- *     odp_init_global();
- *
- *     ...launch threads
- *     ...wait for threads to terminate
- * }
- *
- * void worker_thread (parameters)
- * {
- *     odp_init_local();
- *
- *     while (1) {
- *         do_work(get_work()); // Replace with ODP calls when defined
- *     }
- *
- * }
- * @endcode
- *
- * Events are receved on input queues and are processed until they are
- * placed on an output queue of some sort. The thread then gets the
- * next event to be processed from an input queue and repeats the
- * process.
- *
- * @subsection sub3_1 Asynchronous Operations
- *
- * Note that work to be performed by a thread may require access to an
- * asynchronous function that takes a significant amount of time to
- * complete. In such cases the event is forwarded to another worker
- * thread or hardware accelerator, depending on the implementation, by
- * placing it on anothert queue, which is an output queue of the
- * thread making the request. This event in turn is received and
- * processed by the thread/accelerator that handles it via its input
- * queue. When this aysynchronous event is complete, the event is
- * placed on the handler's output queue, which feeds back to the
- * original requestor's input queue. When the requesting thread next
- * receives this event it resumes processing of the event following
- * the asynchronous event and works on it either until it is ready for
- * final disposition, or until another asynchronous operation is
- * required to process the event.
- *
- * @subsection sub3_2 Queue Linkages
- *
- * The mapping of input and output queues that connect worker threads
- * to accelerators and related offload functions is a cooperation
- * between the implementation and the ODP application. The
- * implementation defines the service funtions that are available to
- * worker threads (e.g., cypto offload services) and as part of that
- * definition defines the queue structure that connects requests to
- * those services as well as the outputs from those services that
- * connect back to the requesting workers. The ODP application, in
- * turn, defines the number of worker threads and how they cooperate
- * among themselves. Note that the application may use ODP core
- * enumeration APIs to decide how many such worker threads should be
- * deployed.
- *
- * @subsection sub3_3 Packet I/O
- *
- * In ODP packet I/O is implicit by reading from and writing to queues
- * associated with interfaces. An ODP application receives packets by
- * dequeuing an event from an input queue associated with an I/O
- * interface. This either triggers a packet read or (more likely)
- * simply provides the next (queued) packet from the associated
- * interface. The actual mechanism used to effect the receipt of the
- * packet is left to the ODP implementation and may involve any
- * combination of sofware and/or hardware operations.
- *
- * Similarly, packet transmission is performed by writing a packet to
- * an output queue associated with an I/O interface. Again, this
- * schedules the packet for output using some combination of software
- * and/or hardware as determined by the implementation. ODP applications
- * themselves, therefore, are freed from the details of how packet I/O
- * is performed or buffered to minimize latencies. The latter is the
- * concern of the ODP implementation to achieve optimal results for
- * the platform supporting the implementation.
- *
- * @subsection How to Use this Reference
- *
- * This reference provides an overview of each data structure and API
- * function, along with a graphical representation of the various
- * structural dependencies among them. When using the HTML version of
- * this reference, all links are dynamic and provide access to the
- * underlying implementation source files as well, thus providing both
- * a ready reference to API parameters and syntax, as well as
- * convenient access to the actual implementation behind them to
- * further programmer understandng.
- */
-
 #ifndef ODP_H_
 #define ODP_H_

diff --git a/platform/linux-generic/Doxyfile.in b/platform/linux-generic/Doxyfile.in
index 661924b..572a3dd 100644
--- a/platform/linux-generic/Doxyfile.in
+++ b/platform/linux-generic/Doxyfile.in
@@ -9,8 +9,8 @@ TYPEDEF_HIDES_STRUCT = YES
 EXTRACT_STATIC = YES
 SORT_MEMBER_DOCS = NO
 WARN_NO_PARAMDOC = YES
-INPUT = ../../include ../../test
-FILE_PATTERNS = odp*.h odp*.c
+INPUT = ../../doc ../../include ../../test
+FILE_PATTERNS = odp*.h odp*.c *.dox
 RECURSIVE = YES
 SOURCE_BROWSER = YES
 REFERENCED_BY_RELATION = YES